Mirror of https://github.com/ansible-collections/community.general.git

[9.0.0] Remove deprecated modules and features (#8198)

* Remove deprecated modules.

* Update BOTMETA.

* Update ignore.txt files.

* Bump collection version to 9.0.0.

* Change timeout from 10 to 60.

* Remove the alias autosubscribe of auto_attach.

* Change default of mode from compatibility to new.

* Remove deprecated classes.

* Remove mh.mixins.deps.DependencyMixin.

* Remove flowdock module.

* Remove proxmox_default_behavior option.

* Remove ack_* options.

* Remove deprecated command support.

* Change virtualenv behavior.

* Fix changelog.

* Remove imports of deprecated (and now removed) code.

* Fix tests.

* Fix sanity tests.

* Require Django 4.1.

* Use V() instead of C() for values.

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* django_manage: improve docs for release 9.0.0

* markup

* fix doc notes in cpanm

---------

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Alexei Znamensky <russoz@gmail.com>
Felix Fontein 2024-04-22 18:28:22 +02:00 committed by GitHub
parent 17e11d7d7e
commit be3b66c8b5
59 changed files with 143 additions and 9719 deletions

.github/BOTMETA.yml

@@ -532,8 +532,6 @@ files:
     maintainers: $team_flatpak
   $modules/flatpak_remote.py:
     maintainers: $team_flatpak
-  $modules/flowdock.py:
-    ignore: mcodd
   $modules/gandi_livedns.py:
     maintainers: gthiemonge
   $modules/gconftool2.py:
@@ -1096,46 +1094,6 @@ files:
   $modules/python_requirements_info.py:
     ignore: ryansb
     maintainers: willthames
-  $modules/rax:
-    ignore: ryansb sivel
-  $modules/rax.py:
-    maintainers: omgjlk sivel
-  $modules/rax_cbs.py:
-    maintainers: claco
-  $modules/rax_cbs_attachments.py:
-    maintainers: claco
-  $modules/rax_cdb.py:
-    maintainers: jails
-  $modules/rax_cdb_database.py:
-    maintainers: jails
-  $modules/rax_cdb_user.py:
-    maintainers: jails
-  $modules/rax_clb.py:
-    maintainers: claco
-  $modules/rax_clb_nodes.py:
-    maintainers: neuroid
-  $modules/rax_clb_ssl.py:
-    maintainers: smashwilson
-  $modules/rax_files.py:
-    maintainers: angstwad
-  $modules/rax_files_objects.py:
-    maintainers: angstwad
-  $modules/rax_identity.py:
-    maintainers: claco
-  $modules/rax_mon_alarm.py:
-    maintainers: smashwilson
-  $modules/rax_mon_check.py:
-    maintainers: smashwilson
-  $modules/rax_mon_entity.py:
-    maintainers: smashwilson
-  $modules/rax_mon_notification.py:
-    maintainers: smashwilson
-  $modules/rax_mon_notification_plan.py:
-    maintainers: smashwilson
-  $modules/rax_network.py:
-    maintainers: claco omgjlk
-  $modules/rax_queue.py:
-    maintainers: claco
   $modules/read_csv.py:
     maintainers: dagwieers
   $modules/redfish_:
@@ -1300,8 +1258,6 @@ files:
     maintainers: farhan7500 gautamphegde
   $modules/ssh_config.py:
     maintainers: gaqzi Akasurde
-  $modules/stackdriver.py:
-    maintainers: bwhaley
   $modules/stacki_host.py:
     labels: stacki_host
     maintainers: bsanders bbyhuy
@@ -1394,8 +1350,6 @@ files:
     maintainers: $team_wdc
   $modules/wdc_redfish_info.py:
     maintainers: $team_wdc
-  $modules/webfaction_:
-    maintainers: quentinsf
   $modules/xattr.py:
     labels: xattr
     maintainers: bcoca

changelogs/fragments/ (new changelog fragment)

@@ -0,0 +1,18 @@
removed_features:
- "rax* modules, rax module utils, rax docs fragment - the Rackspace modules relied on the deprecated package ``pyrax`` and were thus removed (https://github.com/ansible-collections/community.general/pull/8198)."
- "stackdriver - this module relied on HTTPS APIs that do not exist anymore and was thus removed (https://github.com/ansible-collections/community.general/pull/8198)."
- "webfaction_* modules - these modules relied on HTTPS APIs that do not exist anymore and were thus removed (https://github.com/ansible-collections/community.general/pull/8198)."
- "flowdock - this module relied on HTTPS APIs that do not exist anymore and was thus removed (https://github.com/ansible-collections/community.general/pull/8198)."
- "redhat_subscription - the alias ``autosubscribe`` of the ``auto_attach`` option was removed (https://github.com/ansible-collections/community.general/pull/8198)."
- "redhat module utils - the classes ``Rhsm``, ``RhsmPool``, and ``RhsmPools`` have been removed (https://github.com/ansible-collections/community.general/pull/8198)."
- "mh.mixins.deps module utils - the ``DependencyMixin`` has been removed. Use the ``deps`` module utils instead (https://github.com/ansible-collections/community.general/pull/8198)."
- "proxmox - the ``proxmox_default_behavior`` option has been removed (https://github.com/ansible-collections/community.general/pull/8198)."
- "ansible_galaxy_install - the ``ack_ansible29`` and ``ack_min_ansiblecore211`` options have been removed. They no longer had any effect (https://github.com/ansible-collections/community.general/pull/8198)."
- "django_manage - support for the ``command`` values ``cleanup``, ``syncdb``, and ``validate`` were removed. Use ``clearsessions``, ``migrate``, and ``check`` instead, respectively (https://github.com/ansible-collections/community.general/pull/8198)."
deprecated_features:
- "django_manage - the ``ack_venv_creation_deprecation`` option has no more effect and will be removed from community.general 11.0.0 (https://github.com/ansible-collections/community.general/pull/8198)."
breaking_changes:
- "redfish_command, redfish_config, redfish_info - change the default for ``timeout`` from 10 to 60 (https://github.com/ansible-collections/community.general/pull/8198)."
- "cpanm - the default of the ``mode`` option changed from ``compatibility`` to ``new`` (https://github.com/ansible-collections/community.general/pull/8198)."
- "django_manage - the module will now fail if ``virtualenv`` is specified but no virtual environment exists at that location (https://github.com/ansible-collections/community.general/pull/8198)."
- "django_manage - the module now requires Django >= 4.1 (https://github.com/ansible-collections/community.general/pull/8198)."

galaxy.yml

@@ -5,7 +5,7 @@
 namespace: community
 name: general
-version: 8.6.0
+version: 9.0.0
 readme: README.md
 authors:
   - Ansible (https://github.com/ansible)

meta/runtime.yml

@@ -57,109 +57,109 @@ plugin_routing:
         removal_version: 10.0.0
         warning_text: Use community.general.consul_token and/or community.general.consul_policy instead.
     rax_cbs_attachments:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_cbs:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_cdb_database:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_cdb_user:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_cdb:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_clb_nodes:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_clb_ssl:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_clb:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_dns_record:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_dns:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_facts:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_files_objects:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_files:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_identity:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_keypair:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_meta:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_mon_alarm:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_mon_check:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_mon_entity:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_mon_notification_plan:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_mon_notification:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_network:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_queue:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_scaling_group:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rax_scaling_policy:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on the deprecated package pyrax.
+        warning_text: This module relied on the deprecated package pyrax.
     rhn_channel:
       deprecation:
         removal_version: 10.0.0
@@ -171,9 +171,9 @@ plugin_routing:
         warning_text: RHN is EOL, please contact the community.general maintainers
           if still using this; see the module documentation for more details.
     stackdriver:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on HTTPS APIs that do not exist anymore,
+        warning_text: This module relied on HTTPS APIs that do not exist anymore,
           and any new development in the direction of providing an alternative should
           happen in the context of the google.cloud collection.
     ali_instance_facts:
@@ -237,9 +237,9 @@ plugin_routing:
     docker_volume_info:
       redirect: community.docker.docker_volume_info
     flowdock:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on HTTPS APIs that do not exist anymore and
+        warning_text: This module relied on HTTPS APIs that do not exist anymore and
           there is no clear path to update.
     foreman:
       tombstone:
@@ -727,29 +727,29 @@ plugin_routing:
         removal_version: 3.0.0
         warning_text: Use community.general.vertica_info instead.
     webfaction_app:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on HTTPS APIs that do not exist anymore and
+        warning_text: This module relied on HTTPS APIs that do not exist anymore and
          there is no clear path to update.
     webfaction_db:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on HTTPS APIs that do not exist anymore and
+        warning_text: This module relied on HTTPS APIs that do not exist anymore and
          there is no clear path to update.
     webfaction_domain:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on HTTPS APIs that do not exist anymore and
+        warning_text: This module relied on HTTPS APIs that do not exist anymore and
          there is no clear path to update.
     webfaction_mailbox:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on HTTPS APIs that do not exist anymore and
+        warning_text: This module relied on HTTPS APIs that do not exist anymore and
          there is no clear path to update.
     webfaction_site:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This module relies on HTTPS APIs that do not exist anymore and
+        warning_text: This module relied on HTTPS APIs that do not exist anymore and
          there is no clear path to update.
     xenserver_guest_facts:
       tombstone:
@@ -757,9 +757,9 @@ plugin_routing:
         warning_text: Use community.general.xenserver_guest_info instead.
   doc_fragments:
     rackspace:
-      deprecation:
+      tombstone:
         removal_version: 9.0.0
-        warning_text: This doc fragment is used by rax modules, that rely on the deprecated
+        warning_text: This doc fragment was used by rax modules, that relied on the deprecated
          package pyrax.
     _gcp:
       redirect: community.google._gcp
@@ -777,9 +777,9 @@ plugin_routing:
       redirect: community.postgresql.postgresql
  module_utils:
    rax:
-      deprecation:
+      tombstone:
        removal_version: 9.0.0
-        warning_text: This module util relies on the deprecated package pyrax.
+        warning_text: This module util relied on the deprecated package pyrax.
    docker.common:
      redirect: community.docker.common
    docker.swarm:
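After this change, a tombstoned entry in meta/runtime.yml reads as follows (reconstructed from the hunks above; indentation is approximate):

plugin_routing:
  modules:
    rax_cbs:
      tombstone:
        removal_version: 9.0.0
        warning_text: This module relied on the deprecated package pyrax.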

plugins/doc_fragments/rackspace.py (removed)

@ -1,120 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2014, Matt Martz <matt@sivel.net>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard Rackspace only documentation fragment
DOCUMENTATION = r'''
options:
api_key:
description:
- Rackspace API key, overrides O(credentials).
type: str
aliases: [ password ]
credentials:
description:
- File to find the Rackspace credentials in. Ignored if O(api_key) and
O(username) are provided.
type: path
aliases: [ creds_file ]
env:
description:
- Environment as configured in C(~/.pyrax.cfg),
see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration).
type: str
region:
description:
- Region to create an instance in.
type: str
username:
description:
- Rackspace username, overrides O(credentials).
type: str
validate_certs:
description:
- Whether or not to require SSL validation of API endpoints.
type: bool
aliases: [ verify_ssl ]
requirements:
- pyrax
notes:
- The following environment variables can be used, E(RAX_USERNAME),
E(RAX_API_KEY), E(RAX_CREDS_FILE), E(RAX_CREDENTIALS), E(RAX_REGION).
- E(RAX_CREDENTIALS) and E(RAX_CREDS_FILE) point to a credentials file
appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating).
- E(RAX_USERNAME) and E(RAX_API_KEY) obviate the use of a credentials file.
- E(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...).
'''
# Documentation fragment including attributes to enable communication
# of other OpenStack clouds. Not all rax modules support this.
OPENSTACK = r'''
options:
api_key:
type: str
description:
- Rackspace API key, overrides O(credentials).
aliases: [ password ]
auth_endpoint:
type: str
description:
- The URI of the authentication service.
- If not specified will be set to U(https://identity.api.rackspacecloud.com/v2.0/).
credentials:
type: path
description:
- File to find the Rackspace credentials in. Ignored if O(api_key) and
O(username) are provided.
aliases: [ creds_file ]
env:
type: str
description:
- Environment as configured in C(~/.pyrax.cfg),
see U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#pyrax-configuration).
identity_type:
type: str
description:
- Authentication mechanism to use, such as rackspace or keystone.
default: rackspace
region:
type: str
description:
- Region to create an instance in.
tenant_id:
type: str
description:
- The tenant ID used for authentication.
tenant_name:
type: str
description:
- The tenant name used for authentication.
username:
type: str
description:
- Rackspace username, overrides O(credentials).
validate_certs:
description:
- Whether or not to require SSL validation of API endpoints.
type: bool
aliases: [ verify_ssl ]
deprecated:
removed_in: 9.0.0
why: This module relies on the deprecated package pyrax.
alternative: Use the Openstack modules instead.
requirements:
- pyrax
notes:
- The following environment variables can be used, E(RAX_USERNAME),
E(RAX_API_KEY), E(RAX_CREDS_FILE), E(RAX_CREDENTIALS), E(RAX_REGION).
- E(RAX_CREDENTIALS) and E(RAX_CREDS_FILE) points to a credentials file
appropriate for pyrax. See U(https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating).
- E(RAX_USERNAME) and E(RAX_API_KEY) obviate the use of a credentials file.
- E(RAX_REGION) defines a Rackspace Public Cloud region (DFW, ORD, LON, ...).
'''

plugins/module_utils/mh/mixins/deps.py

@@ -7,11 +7,6 @@
 from __future__ import absolute_import, division, print_function
 __metaclass__ = type

-import traceback
-
-from ansible_collections.community.general.plugins.module_utils.mh.base import ModuleHelperBase
-from ansible_collections.community.general.plugins.module_utils.mh.deco import module_fails_on_exception
-

 class DependencyCtxMgr(object):
     def __init__(self, name, msg=None):
@@ -35,39 +30,3 @@ class DependencyCtxMgr(object):
     @property
     def text(self):
         return self.msg or str(self.exc_val)
-
-
-class DependencyMixin(ModuleHelperBase):
-    """
-    THIS CLASS IS BEING DEPRECATED.
-    See the deprecation notice in ``DependencyMixin.fail_on_missing_deps()`` below.
-    Mixin for mapping module options to running a CLI command with its arguments.
-    """
-    _dependencies = []
-
-    @classmethod
-    def dependency(cls, name, msg):
-        cls._dependencies.append(DependencyCtxMgr(name, msg))
-        return cls._dependencies[-1]
-
-    def fail_on_missing_deps(self):
-        if not self._dependencies:
-            return
-        self.module.deprecate(
-            'The DependencyMixin is being deprecated. '
-            'Modules should use community.general.plugins.module_utils.deps instead.',
-            version='9.0.0',
-            collection_name='community.general',
-        )
-        for d in self._dependencies:
-            if not d.has_it:
-                self.module.fail_json(changed=False,
-                                      exception="\n".join(traceback.format_exception(d.exc_type, d.exc_val, d.exc_tb)),
-                                      msg=d.text,
-                                      **self.output)
-
-    @module_fails_on_exception
-    def run(self):
-        self.fail_on_missing_deps()
-        super(DependencyMixin, self).run()

plugins/module_utils/mh/module_helper.py

@@ -13,12 +13,11 @@ from ansible.module_utils.common.dict_transformations import dict_merge
 # (TODO: remove AnsibleModule!) pylint: disable-next=unused-import
 from ansible_collections.community.general.plugins.module_utils.mh.base import ModuleHelperBase, AnsibleModule  # noqa: F401
 from ansible_collections.community.general.plugins.module_utils.mh.mixins.state import StateMixin
-from ansible_collections.community.general.plugins.module_utils.mh.mixins.deps import DependencyMixin
 from ansible_collections.community.general.plugins.module_utils.mh.mixins.vars import VarsMixin
 from ansible_collections.community.general.plugins.module_utils.mh.mixins.deprecate_attrs import DeprecateAttrsMixin


-class ModuleHelper(DeprecateAttrsMixin, VarsMixin, DependencyMixin, ModuleHelperBase):
+class ModuleHelper(DeprecateAttrsMixin, VarsMixin, ModuleHelperBase):
     facts_name = None
     output_params = ()
     diff_params = ()

plugins/module_utils/module_helper.py

@@ -14,7 +14,7 @@ from ansible_collections.community.general.plugins.module_utils.mh.module_helper
     ModuleHelper, StateModuleHelper, AnsibleModule
 )
 from ansible_collections.community.general.plugins.module_utils.mh.mixins.state import StateMixin  # noqa: F401
-from ansible_collections.community.general.plugins.module_utils.mh.mixins.deps import DependencyCtxMgr, DependencyMixin  # noqa: F401
+from ansible_collections.community.general.plugins.module_utils.mh.mixins.deps import DependencyCtxMgr  # noqa: F401
 from ansible_collections.community.general.plugins.module_utils.mh.exceptions import ModuleHelperException  # noqa: F401
 from ansible_collections.community.general.plugins.module_utils.mh.deco import (
     cause_changes, module_fails_on_exception, check_mode_skip, check_mode_skip_returns,

plugins/module_utils/rax.py (removed)

@ -1,334 +0,0 @@
# -*- coding: utf-8 -*-
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by
# Ansible still belong to the author of the module, and may assign their own
# license to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
#
# Simplified BSD License (see LICENSES/BSD-2-Clause.txt or https://opensource.org/licenses/BSD-2-Clause)
# SPDX-License-Identifier: BSD-2-Clause
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import re
from uuid import UUID
from ansible.module_utils.six import text_type, binary_type
FINAL_STATUSES = ('ACTIVE', 'ERROR')
VOLUME_STATUS = ('available', 'attaching', 'creating', 'deleting', 'in-use',
'error', 'error_deleting')
CLB_ALGORITHMS = ['RANDOM', 'LEAST_CONNECTIONS', 'ROUND_ROBIN',
'WEIGHTED_LEAST_CONNECTIONS', 'WEIGHTED_ROUND_ROBIN']
CLB_PROTOCOLS = ['DNS_TCP', 'DNS_UDP', 'FTP', 'HTTP', 'HTTPS', 'IMAPS',
'IMAPv4', 'LDAP', 'LDAPS', 'MYSQL', 'POP3', 'POP3S', 'SMTP',
'TCP', 'TCP_CLIENT_FIRST', 'UDP', 'UDP_STREAM', 'SFTP']
NON_CALLABLES = (text_type, binary_type, bool, dict, int, list, type(None))
PUBLIC_NET_ID = "00000000-0000-0000-0000-000000000000"
SERVICE_NET_ID = "11111111-1111-1111-1111-111111111111"
def rax_slugify(value):
"""Prepend a key with rax_ and normalize the key name"""
return 'rax_%s' % (re.sub(r'[^\w-]', '_', value).lower().lstrip('_'))
def rax_clb_node_to_dict(obj):
"""Function to convert a CLB Node object to a dict"""
if not obj:
return {}
node = obj.to_dict()
node['id'] = obj.id
node['weight'] = obj.weight
return node
def rax_to_dict(obj, obj_type='standard'):
"""Generic function to convert a pyrax object to a dict
obj_type values:
standard
clb
server
"""
instance = {}
for key in dir(obj):
value = getattr(obj, key)
if obj_type == 'clb' and key == 'nodes':
instance[key] = []
for node in value:
instance[key].append(rax_clb_node_to_dict(node))
elif (isinstance(value, list) and len(value) > 0 and
not isinstance(value[0], NON_CALLABLES)):
instance[key] = []
for item in value:
instance[key].append(rax_to_dict(item))
elif (isinstance(value, NON_CALLABLES) and not key.startswith('_')):
if obj_type == 'server':
if key == 'image':
if not value:
instance['rax_boot_source'] = 'volume'
else:
instance['rax_boot_source'] = 'local'
key = rax_slugify(key)
instance[key] = value
if obj_type == 'server':
for attr in ['id', 'accessIPv4', 'name', 'status']:
instance[attr] = instance.get(rax_slugify(attr))
return instance
def rax_find_bootable_volume(module, rax_module, server, exit=True):
"""Find a servers bootable volume"""
cs = rax_module.cloudservers
cbs = rax_module.cloud_blockstorage
server_id = rax_module.utils.get_id(server)
volumes = cs.volumes.get_server_volumes(server_id)
bootable_volumes = []
for volume in volumes:
vol = cbs.get(volume)
if module.boolean(vol.bootable):
bootable_volumes.append(vol)
if not bootable_volumes:
if exit:
module.fail_json(msg='No bootable volumes could be found for '
'server %s' % server_id)
else:
return False
elif len(bootable_volumes) > 1:
if exit:
module.fail_json(msg='Multiple bootable volumes found for server '
'%s' % server_id)
else:
return False
return bootable_volumes[0]
def rax_find_image(module, rax_module, image, exit=True):
"""Find a server image by ID or Name"""
cs = rax_module.cloudservers
try:
UUID(image)
except ValueError:
try:
image = cs.images.find(human_id=image)
except (cs.exceptions.NotFound, cs.exceptions.NoUniqueMatch):
try:
image = cs.images.find(name=image)
except (cs.exceptions.NotFound,
cs.exceptions.NoUniqueMatch):
if exit:
module.fail_json(msg='No matching image found (%s)' %
image)
else:
return False
return rax_module.utils.get_id(image)
def rax_find_volume(module, rax_module, name):
"""Find a Block storage volume by ID or name"""
cbs = rax_module.cloud_blockstorage
try:
UUID(name)
volume = cbs.get(name)
except ValueError:
try:
volume = cbs.find(name=name)
except rax_module.exc.NotFound:
volume = None
except Exception as e:
module.fail_json(msg='%s' % e)
return volume
def rax_find_network(module, rax_module, network):
"""Find a cloud network by ID or name"""
cnw = rax_module.cloud_networks
try:
UUID(network)
except ValueError:
if network.lower() == 'public':
return cnw.get_server_networks(PUBLIC_NET_ID)
elif network.lower() == 'private':
return cnw.get_server_networks(SERVICE_NET_ID)
else:
try:
network_obj = cnw.find_network_by_label(network)
except (rax_module.exceptions.NetworkNotFound,
rax_module.exceptions.NetworkLabelNotUnique):
module.fail_json(msg='No matching network found (%s)' %
network)
else:
return cnw.get_server_networks(network_obj)
else:
return cnw.get_server_networks(network)
def rax_find_server(module, rax_module, server):
"""Find a Cloud Server by ID or name"""
cs = rax_module.cloudservers
try:
UUID(server)
server = cs.servers.get(server)
except ValueError:
servers = cs.servers.list(search_opts=dict(name='^%s$' % server))
if not servers:
module.fail_json(msg='No Server was matched by name, '
'try using the Server ID instead')
if len(servers) > 1:
module.fail_json(msg='Multiple servers matched by name, '
'try using the Server ID instead')
# We made it this far, grab the first and hopefully only server
# in the list
server = servers[0]
return server
def rax_find_loadbalancer(module, rax_module, loadbalancer):
"""Find a Cloud Load Balancer by ID or name"""
clb = rax_module.cloud_loadbalancers
try:
found = clb.get(loadbalancer)
except Exception:
found = []
for lb in clb.list():
if loadbalancer == lb.name:
found.append(lb)
if not found:
module.fail_json(msg='No loadbalancer was matched')
if len(found) > 1:
module.fail_json(msg='Multiple loadbalancers matched')
# We made it this far, grab the first and hopefully only item
# in the list
found = found[0]
return found
def rax_argument_spec():
"""Return standard base dictionary used for the argument_spec
argument in AnsibleModule
"""
return dict(
api_key=dict(type='str', aliases=['password'], no_log=True),
auth_endpoint=dict(type='str'),
credentials=dict(type='path', aliases=['creds_file']),
env=dict(type='str'),
identity_type=dict(type='str', default='rackspace'),
region=dict(type='str'),
tenant_id=dict(type='str'),
tenant_name=dict(type='str'),
username=dict(type='str'),
validate_certs=dict(type='bool', aliases=['verify_ssl']),
)
def rax_required_together():
"""Return the default list used for the required_together argument to
AnsibleModule"""
return [['api_key', 'username']]
def setup_rax_module(module, rax_module, region_required=True):
"""Set up pyrax in a standard way for all modules"""
rax_module.USER_AGENT = 'ansible/%s %s' % (module.ansible_version,
rax_module.USER_AGENT)
api_key = module.params.get('api_key')
auth_endpoint = module.params.get('auth_endpoint')
credentials = module.params.get('credentials')
env = module.params.get('env')
identity_type = module.params.get('identity_type')
region = module.params.get('region')
tenant_id = module.params.get('tenant_id')
tenant_name = module.params.get('tenant_name')
username = module.params.get('username')
verify_ssl = module.params.get('validate_certs')
if env is not None:
rax_module.set_environment(env)
rax_module.set_setting('identity_type', identity_type)
if verify_ssl is not None:
rax_module.set_setting('verify_ssl', verify_ssl)
if auth_endpoint is not None:
rax_module.set_setting('auth_endpoint', auth_endpoint)
if tenant_id is not None:
rax_module.set_setting('tenant_id', tenant_id)
if tenant_name is not None:
rax_module.set_setting('tenant_name', tenant_name)
try:
username = username or os.environ.get('RAX_USERNAME')
if not username:
username = rax_module.get_setting('keyring_username')
if username:
api_key = 'USE_KEYRING'
if not api_key:
api_key = os.environ.get('RAX_API_KEY')
credentials = (credentials or os.environ.get('RAX_CREDENTIALS') or
os.environ.get('RAX_CREDS_FILE'))
region = (region or os.environ.get('RAX_REGION') or
rax_module.get_setting('region'))
except KeyError as e:
module.fail_json(msg='Unable to load %s' % e.message)
try:
if api_key and username:
if api_key == 'USE_KEYRING':
rax_module.keyring_auth(username, region=region)
else:
rax_module.set_credentials(username, api_key=api_key,
region=region)
elif credentials:
credentials = os.path.expanduser(credentials)
rax_module.set_credential_file(credentials, region=region)
else:
raise Exception('No credentials supplied!')
except Exception as e:
if e.message:
msg = str(e.message)
else:
msg = repr(e)
module.fail_json(msg=msg)
if region_required and region not in rax_module.regions:
module.fail_json(msg='%s is not a valid region, must be one of: %s' %
(region, ','.join(rax_module.regions)))
return rax_module
def rax_scaling_group_personality_file(module, files):
if not files:
return []
results = []
for rpath, lpath in files.items():
lpath = os.path.expanduser(lpath)
try:
with open(lpath, 'r') as f:
results.append({
'path': rpath,
'contents': f.read(),
})
except Exception as e:
module.fail_json(msg='Failed to load %s: %s' % (lpath, str(e)))
return results
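The deprecation notices above point to the OpenStack modules as the replacement for the removed Rackspace content. A rough sketch of that direction, assuming the openstack.cloud collection is installed (module name and parameters belong to that collection, not to this commit, and all values are placeholders):

- name: Create a server with openstack.cloud instead of community.general.rax
  openstack.cloud.server:
    cloud: mycloud            # placeholder clouds.yaml entry
    name: web-1               # placeholder
    image: Ubuntu-22.04       # placeholder
    flavor: m1.small          # placeholder
    key_name: mykey           # placeholder
    state: present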

plugins/module_utils/redhat.py

@@ -15,10 +15,8 @@ __metaclass__ = type
 import os
-import re
 import shutil
 import tempfile
-import types

 from ansible.module_utils.six.moves import configparser
@@ -76,241 +74,3 @@ class RegistrationBase(object):
     def subscribe(self, **kwargs):
         raise NotImplementedError("Must be implemented by a sub-class")
class Rhsm(RegistrationBase):
"""
DEPRECATION WARNING
This class is deprecated and will be removed in community.general 9.0.0.
There is no replacement for it; please contact the community.general
maintainers in case you are using it.
"""
def __init__(self, module, username=None, password=None):
RegistrationBase.__init__(self, module, username, password)
self.config = self._read_config()
self.module = module
self.module.deprecate(
'The Rhsm class is deprecated with no replacement.',
version='9.0.0',
collection_name='community.general',
)
def _read_config(self, rhsm_conf='/etc/rhsm/rhsm.conf'):
'''
Load RHSM configuration from /etc/rhsm/rhsm.conf.
Returns:
* ConfigParser object
'''
# Read RHSM defaults ...
cp = configparser.ConfigParser()
cp.read(rhsm_conf)
# Add support for specifying a default value w/o having to standup some configuration
# Yeah, I know this should be subclassed ... but, oh well
def get_option_default(self, key, default=''):
sect, opt = key.split('.', 1)
if self.has_section(sect) and self.has_option(sect, opt):
return self.get(sect, opt)
else:
return default
cp.get_option = types.MethodType(get_option_default, cp, configparser.ConfigParser)
return cp
def enable(self):
'''
Enable the system to receive updates from subscription-manager.
This involves updating affected yum plugins and removing any
conflicting yum repositories.
'''
RegistrationBase.enable(self)
self.update_plugin_conf('rhnplugin', False)
self.update_plugin_conf('subscription-manager', True)
def configure(self, **kwargs):
'''
Configure the system as directed for registration with RHN
Raises:
* Exception - if error occurs while running command
'''
args = ['subscription-manager', 'config']
# Pass supplied **kwargs as parameters to subscription-manager. Ignore
# non-configuration parameters and replace '_' with '.'. For example,
# 'server_hostname' becomes '--system.hostname'.
for k, v in kwargs.items():
if re.search(r'^(system|rhsm)_', k):
args.append('--%s=%s' % (k.replace('_', '.'), v))
self.module.run_command(args, check_rc=True)
@property
def is_registered(self):
'''
Determine whether the current system
Returns:
* Boolean - whether the current system is currently registered to
RHN.
'''
args = ['subscription-manager', 'identity']
rc, stdout, stderr = self.module.run_command(args, check_rc=False)
if rc == 0:
return True
else:
return False
def register(self, username, password, autosubscribe, activationkey):
'''
Register the current system to the provided RHN server
Raises:
* Exception - if error occurs while running command
'''
args = ['subscription-manager', 'register']
# Generate command arguments
if activationkey:
args.append('--activationkey "%s"' % activationkey)
else:
if autosubscribe:
args.append('--autosubscribe')
if username:
args.extend(['--username', username])
if password:
args.extend(['--password', password])
# Do the needful...
rc, stderr, stdout = self.module.run_command(args, check_rc=True)
def unsubscribe(self):
'''
Unsubscribe a system from all subscribed channels
Raises:
* Exception - if error occurs while running command
'''
args = ['subscription-manager', 'unsubscribe', '--all']
rc, stderr, stdout = self.module.run_command(args, check_rc=True)
def unregister(self):
'''
Unregister a currently registered system
Raises:
* Exception - if error occurs while running command
'''
args = ['subscription-manager', 'unregister']
rc, stderr, stdout = self.module.run_command(args, check_rc=True)
self.update_plugin_conf('rhnplugin', False)
self.update_plugin_conf('subscription-manager', False)
def subscribe(self, regexp):
'''
Subscribe current system to available pools matching the specified
regular expression
Raises:
* Exception - if error occurs while running command
'''
# Available pools ready for subscription
available_pools = RhsmPools(self.module)
for pool in available_pools.filter(regexp):
pool.subscribe()
class RhsmPool(object):
"""
Convenience class for housing subscription information
DEPRECATION WARNING
This class is deprecated and will be removed in community.general 9.0.0.
There is no replacement for it; please contact the community.general
maintainers in case you are using it.
"""
def __init__(self, module, **kwargs):
self.module = module
for k, v in kwargs.items():
setattr(self, k, v)
self.module.deprecate(
'The RhsmPool class is deprecated with no replacement.',
version='9.0.0',
collection_name='community.general',
)
def __str__(self):
return str(self.__getattribute__('_name'))
def subscribe(self):
args = "subscription-manager subscribe --pool %s" % self.PoolId
rc, stdout, stderr = self.module.run_command(args, check_rc=True)
if rc == 0:
return True
else:
return False
class RhsmPools(object):
"""
This class is used for manipulating pools subscriptions with RHSM
DEPRECATION WARNING
This class is deprecated and will be removed in community.general 9.0.0.
There is no replacement for it; please contact the community.general
maintainers in case you are using it.
"""
def __init__(self, module):
self.module = module
self.products = self._load_product_list()
self.module.deprecate(
'The RhsmPools class is deprecated with no replacement.',
version='9.0.0',
collection_name='community.general',
)
def __iter__(self):
return self.products.__iter__()
def _load_product_list(self):
"""
Loads list of all available pools for system in data structure
"""
args = "subscription-manager list --available"
rc, stdout, stderr = self.module.run_command(args, check_rc=True)
products = []
for line in stdout.split('\n'):
# Remove leading+trailing whitespace
line = line.strip()
# An empty line implies the end of an output group
if len(line) == 0:
continue
# If a colon ':' is found, parse
elif ':' in line:
(key, value) = line.split(':', 1)
key = key.strip().replace(" ", "") # To unify
value = value.strip()
if key in ['ProductName', 'SubscriptionName']:
# Remember the name for later processing
products.append(RhsmPool(self.module, _name=value, key=value))
elif products:
# Associate value with most recently recorded product
products[-1].__setattr__(key, value)
# FIXME - log some warning?
# else:
# warnings.warn("Unhandled subscription key/value: %s/%s" % (key,value))
return products
def filter(self, regexp='^$'):
'''
Return a list of RhsmPools whose name matches the provided regular expression
'''
r = re.compile(regexp)
for product in self.products:
if r.search(product._name):
yield product

plugins/modules/ansible_galaxy_install.py

@@ -73,16 +73,6 @@ options:
       - Using O(force=true) is mandatory when downgrading.
     type: bool
     default: false
-  ack_ansible29:
-    description:
-      - This option has no longer any effect and will be removed in community.general 9.0.0.
-    type: bool
-    default: false
-  ack_min_ansiblecore211:
-    description:
-      - This option has no longer any effect and will be removed in community.general 9.0.0.
-    type: bool
-    default: false
 """

 EXAMPLES = """
@@ -202,18 +192,6 @@ class AnsibleGalaxyInstall(ModuleHelper):
            dest=dict(type='path'),
            force=dict(type='bool', default=False),
            no_deps=dict(type='bool', default=False),
-            ack_ansible29=dict(
-                type='bool',
-                default=False,
-                removed_in_version='9.0.0',
-                removed_from_collection='community.general',
-            ),
-            ack_min_ansiblecore211=dict(
-                type='bool',
-                default=False,
-                removed_in_version='9.0.0',
-                removed_from_collection='community.general',
-            ),
        ),
        mutually_exclusive=[('name', 'requirements_file')],
        required_one_of=[('name', 'requirements_file')],
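Because the ack_* options have been dropped from the argument spec entirely, tasks that still pass them will now be rejected as unsupported parameters; since they were already no-ops, they can simply be deleted. A minimal sketch (the collection name is a placeholder):

- name: Install a collection with ansible-galaxy
  community.general.ansible_galaxy_install:
    type: collection
    name: community.grafana   # placeholder; ack_ansible29/ack_min_ansiblecore211 are gone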

plugins/modules/cpanm.py

@@ -68,9 +68,10 @@ options:
   mode:
     description:
       - Controls the module behavior. See notes below for more details.
-      - Default is V(compatibility) but that behavior is deprecated and will be changed to V(new) in community.general 9.0.0.
+      - The default changed from V(compatibility) to V(new) in community.general 9.0.0.
     type: str
     choices: [compatibility, new]
+    default: new
     version_added: 3.0.0
   name_check:
     description:
@@ -80,12 +81,16 @@ options:
 notes:
   - Please note that U(http://search.cpan.org/dist/App-cpanminus/bin/cpanm, cpanm) must be installed on the remote host.
   - "This module now comes with a choice of execution O(mode): V(compatibility) or V(new)."
-  - "O(mode=compatibility): When using V(compatibility) mode, the module will keep backward compatibility. This is the default mode.
+  - >
+    O(mode=compatibility): When using V(compatibility) mode, the module will keep backward compatibility.
+    This was the default mode before community.general 9.0.0.
     O(name) must be either a module name or a distribution file. If the perl module given by O(name) is installed (at the exact O(version)
     when specified), then nothing happens. Otherwise, it will be installed using the C(cpanm) executable. O(name) cannot be an URL, or a git URL.
-    C(cpanm) version specifiers do not work in this mode."
-  - "O(mode=new): When using V(new) mode, the module will behave differently. The O(name) parameter may refer to a module name, a distribution file,
-    a HTTP URL or a git repository URL as described in C(cpanminus) documentation. C(cpanm) version specifiers are recognized."
+    C(cpanm) version specifiers do not work in this mode.
+  - >
+    O(mode=new): When using V(new) mode, the module will behave differently. The O(name) parameter may refer to a module name, a distribution file,
+    a HTTP URL or a git repository URL as described in C(cpanminus) documentation. C(cpanm) version specifiers are recognized.
+    This is the default mode from community.general 9.0.0 onwards.
 author:
   - "Franck Cuny (@fcuny)"
   - "Alexei Znamensky (@russoz)"
@@ -150,7 +155,7 @@ class CPANMinus(ModuleHelper):
            mirror_only=dict(type='bool', default=False),
            installdeps=dict(type='bool', default=False),
            executable=dict(type='path'),
-            mode=dict(type='str', choices=['compatibility', 'new']),
+            mode=dict(type='str', default='new', choices=['compatibility', 'new']),
            name_check=dict(type='str')
        ),
        required_one_of=[('name', 'from_path')],
@@ -168,14 +173,6 @@ class CPANMinus(ModuleHelper):
     def __init_module__(self):
         v = self.vars
-        if v.mode is None:
-            self.deprecate(
-                "The default value 'compatibility' for parameter 'mode' is being deprecated "
-                "and it will be replaced by 'new'",
-                version="9.0.0",
-                collection_name="community.general"
-            )
-            v.mode = "compatibility"
         if v.mode == "compatibility":
             if v.name_check:
                 self.do_raise("Parameter name_check can only be used with mode=new")
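Playbooks that depended on the old default can pin mode=compatibility explicitly; with the new default, cpanm version specifiers become available. A short sketch (module and version names are illustrative):

- name: Install Dancer, keeping the pre-9.0.0 behavior
  community.general.cpanm:
    name: Dancer
    mode: compatibility

- name: Install a minimum version, relying on the new default mode
  community.general.cpanm:
    name: Dancer~2.0   # version specifiers are only honored in mode=new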

plugins/modules/django_manage.py

@ -28,23 +28,16 @@ options:
command: command:
description: description:
- The name of the Django management command to run. The commands listed below are built in this module and have some basic parameter validation. - The name of the Django management command to run. The commands listed below are built in this module and have some basic parameter validation.
- >
V(cleanup) - clean up old data from the database (deprecated in Django 1.5). This parameter will be
removed in community.general 9.0.0. Use V(clearsessions) instead.
- V(collectstatic) - Collects the static files into C(STATIC_ROOT). - V(collectstatic) - Collects the static files into C(STATIC_ROOT).
- V(createcachetable) - Creates the cache tables for use with the database cache backend. - V(createcachetable) - Creates the cache tables for use with the database cache backend.
- V(flush) - Removes all data from the database. - V(flush) - Removes all data from the database.
- V(loaddata) - Searches for and loads the contents of the named O(fixtures) into the database. - V(loaddata) - Searches for and loads the contents of the named O(fixtures) into the database.
- V(migrate) - Synchronizes the database state with models and migrations. - V(migrate) - Synchronizes the database state with models and migrations.
- >
V(syncdb) - Synchronizes the database state with models and migrations (deprecated in Django 1.7).
This parameter will be removed in community.general 9.0.0. Use V(migrate) instead.
- V(test) - Runs tests for all installed apps. - V(test) - Runs tests for all installed apps.
- > - Other commands can be entered, but will fail if they are unknown to Django. Other commands that may
V(validate) - Validates all installed models (deprecated in Django 1.7). This parameter will be
removed in community.general 9.0.0. Use V(check) instead.
- Other commands can be entered, but will fail if they are unknown to Django. Other commands that may
prompt for user input should be run with the C(--noinput) flag. prompt for user input should be run with the C(--noinput) flag.
- Support for the values V(cleanup), V(syncdb), V(validate) was removed in community.general 9.0.0.
See note about supported versions of Django.
type: str type: str
required: true required: true
project_path: project_path:
@ -69,6 +62,7 @@ options:
virtualenv: virtualenv:
description: description:
- An optional path to a C(virtualenv) installation to use while running the manage application. - An optional path to a C(virtualenv) installation to use while running the manage application.
- The virtual environment must exist, otherwise the module will fail.
type: path type: path
aliases: [virtual_env] aliases: [virtual_env]
apps: apps:
@ -132,31 +126,24 @@ options:
aliases: [test_runner] aliases: [test_runner]
ack_venv_creation_deprecation: ack_venv_creation_deprecation:
description: description:
- >- - This option no longer has any effect since community.general 9.0.0.
When a O(virtualenv) is set but the virtual environment does not exist, the current behavior is - It will be removed from community.general 11.0.0.
to create a new virtual environment. That behavior is deprecated and if that case happens it will
generate a deprecation warning. Set this flag to V(true) to suppress the deprecation warning.
- Please note that you will receive no further warning about this being removed until the module
will start failing in such cases from community.general 9.0.0 on.
type: bool type: bool
version_added: 5.8.0 version_added: 5.8.0
notes: notes:
- > - >
B(ATTENTION - DEPRECATION): Support for Django releases older than 4.1 will be removed in B(ATTENTION): Support for Django releases older than 4.1 has been removed in
community.general version 9.0.0 (estimated to be released in May 2024). community.general version 9.0.0. While the module allows for free-form commands
Please notice that Django 4.1 requires Python 3.8 or greater. does not verify the version of Django being used, it is B(strongly recommended)
- C(virtualenv) (U(http://www.virtualenv.org)) must be installed on the remote host if the O(virtualenv) parameter to use a more recent version of Django.
is specified. This requirement is deprecated and will be removed in community.general version 9.0.0. - Please notice that Django 4.1 requires Python 3.8 or greater.
- This module will create a virtualenv if the O(virtualenv) parameter is specified and a virtual environment does not already - This module will not create a virtualenv if the O(virtualenv) parameter is specified and a virtual environment
exist at the given location. This behavior is deprecated and will be removed in community.general version 9.0.0. does not already exist at the given location. This behavior changed in community.general version 9.0.0.
- The parameter O(virtualenv) will remain in use, but it will require the specified virtualenv to exist. - The recommended way to create a virtual environment in Ansible is by using M(ansible.builtin.pip).
The recommended way to create one in Ansible is by using M(ansible.builtin.pip).
- This module assumes English error messages for the V(createcachetable) command to detect table existence, - This module assumes English error messages for the V(createcachetable) command to detect table existence,
unfortunately. unfortunately.
- To be able to use the V(migrate) command with django versions < 1.7, you must have C(south) installed and added - To be able to use the V(collectstatic) command, you must have enabled C(staticfiles) in your settings.
as an app in your settings.
- To be able to use the V(collectstatic) command, you must have enabled staticfiles in your settings.
- Your C(manage.py) application must be executable (C(rwxr-xr-x)), and must have a valid shebang, - Your C(manage.py) application must be executable (C(rwxr-xr-x)), and must have a valid shebang,
for example C(#!/usr/bin/env python), for invoking the appropriate Python interpreter. for example C(#!/usr/bin/env python), for invoking the appropriate Python interpreter.
seealso: seealso:
@ -169,7 +156,7 @@ seealso:
- name: What Python version can I use with Django? - name: What Python version can I use with Django?
description: From the Django FAQ, the response to Python requirements for the framework. description: From the Django FAQ, the response to Python requirements for the framework.
link: https://docs.djangoproject.com/en/dev/faq/install/#what-python-version-can-i-use-with-django link: https://docs.djangoproject.com/en/dev/faq/install/#what-python-version-can-i-use-with-django
requirements: [ "virtualenv", "django" ] requirements: [ "django >= 4.1" ]
author: author:
- Alexei Znamensky (@russoz) - Alexei Znamensky (@russoz)
- Scott Anderson (@tastychutney) - Scott Anderson (@tastychutney)
@ -178,7 +165,7 @@ author:
EXAMPLES = """ EXAMPLES = """
- name: Run cleanup on the application installed in django_dir - name: Run cleanup on the application installed in django_dir
community.general.django_manage: community.general.django_manage:
command: cleanup command: clearsessions
project_path: "{{ django_dir }}" project_path: "{{ django_dir }}"
- name: Load the initial_data fixture into the application - name: Load the initial_data fixture into the application
@ -189,7 +176,7 @@ EXAMPLES = """
- name: Run syncdb on the application - name: Run syncdb on the application
community.general.django_manage: community.general.django_manage:
command: syncdb command: migrate
project_path: "{{ django_dir }}" project_path: "{{ django_dir }}"
settings: "{{ settings_app_name }}" settings: "{{ settings_app_name }}"
pythonpath: "{{ settings_dir }}" pythonpath: "{{ settings_dir }}"
@ -233,22 +220,7 @@ def _ensure_virtualenv(module):
activate = os.path.join(vbin, 'activate') activate = os.path.join(vbin, 'activate')
if not os.path.exists(activate): if not os.path.exists(activate):
# In version 9.0.0, if the venv is not found, it should fail_json() here. module.fail_json(msg='%s does not point to a valid virtual environment' % venv_param)
if not module.params['ack_venv_creation_deprecation']:
module.deprecate(
'The behavior of "creating the virtual environment when missing" is being '
'deprecated and will be removed in community.general version 9.0.0. '
'Set the module parameter `ack_venv_creation_deprecation: true` to '
'prevent this message from showing up when creating a virtualenv.',
version='9.0.0',
collection_name='community.general',
)
virtualenv = module.get_bin_path('virtualenv', True)
vcmd = [virtualenv, venv_param]
rc, out_venv, err_venv = module.run_command(vcmd)
if rc != 0:
_fail(module, vcmd, out_venv, err_venv)
os.environ["PATH"] = "%s:%s" % (vbin, os.environ["PATH"]) os.environ["PATH"] = "%s:%s" % (vbin, os.environ["PATH"])
os.environ["VIRTUAL_ENV"] = venv_param os.environ["VIRTUAL_ENV"] = venv_param
@ -266,11 +238,6 @@ def loaddata_filter_output(line):
return "Installed" in line and "Installed 0 object" not in line return "Installed" in line and "Installed 0 object" not in line
def syncdb_filter_output(line):
return ("Creating table " in line) \
or ("Installed" in line and "Installed 0 object" not in line)
def migrate_filter_output(line): def migrate_filter_output(line):
return ("Migrating forwards " in line) \ return ("Migrating forwards " in line) \
or ("Installed" in line and "Installed 0 object" not in line) \ or ("Installed" in line and "Installed 0 object" not in line) \
@ -283,13 +250,10 @@ def collectstatic_filter_output(line):
def main(): def main():
command_allowed_param_map = dict( command_allowed_param_map = dict(
cleanup=(),
createcachetable=('cache_table', 'database', ), createcachetable=('cache_table', 'database', ),
flush=('database', ), flush=('database', ),
loaddata=('database', 'fixtures', ), loaddata=('database', 'fixtures', ),
syncdb=('database', ),
test=('failfast', 'testrunner', 'apps', ), test=('failfast', 'testrunner', 'apps', ),
validate=(),
migrate=('apps', 'skip', 'merge', 'database',), migrate=('apps', 'skip', 'merge', 'database',),
collectstatic=('clear', 'link', ), collectstatic=('clear', 'link', ),
) )
@@ -301,7 +265,6 @@ def main():
# forces --noinput on every command that needs it # forces --noinput on every command that needs it
noinput_commands = ( noinput_commands = (
'flush', 'flush',
'syncdb',
'migrate', 'migrate',
'test', 'test',
'collectstatic', 'collectstatic',
@@ -333,7 +296,7 @@ def main():
skip=dict(type='bool'), skip=dict(type='bool'),
merge=dict(type='bool'), merge=dict(type='bool'),
link=dict(type='bool'), link=dict(type='bool'),
ack_venv_creation_deprecation=dict(type='bool'), ack_venv_creation_deprecation=dict(type='bool', removed_in_version='11.0.0', removed_from_collection='community.general'),
), ),
) )
@@ -342,21 +305,6 @@ def main():
project_path = module.params['project_path'] project_path = module.params['project_path']
virtualenv = module.params['virtualenv'] virtualenv = module.params['virtualenv']
try:
_deprecation = dict(
cleanup="clearsessions",
syncdb="migrate",
validate="check",
)
module.deprecate(
'The command {0} has been deprecated as it is no longer supported in recent Django versions.'
'Please use the command {1} instead that provide similar capability.'.format(command_bin, _deprecation[command_bin]),
version='9.0.0',
collection_name='community.general'
)
except KeyError:
pass
for param in specific_params: for param in specific_params:
value = module.params[param] value = module.params[param]
if value and param not in command_allowed_param_map[command_bin]: if value and param not in command_allowed_param_map[command_bin]:

View file

@@ -1,211 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2013 Matt Coddington <coddington@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
deprecated:
removed_in: 9.0.0
why: the endpoints this module relies on do not exist any more and do not resolve to IPs in DNS.
alternative: no known alternative at this point
module: flowdock
author: "Matt Coddington (@mcodd)"
short_description: Send a message to a flowdock
description:
- Send a message to a flowdock team inbox or chat using the push API (see https://www.flowdock.com/api/team-inbox and https://www.flowdock.com/api/chat)
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
token:
type: str
description:
- API token.
required: true
type:
type: str
description:
- Whether to post to 'inbox' or 'chat'
required: true
choices: [ "inbox", "chat" ]
msg:
type: str
description:
- Content of the message
required: true
tags:
type: str
description:
- tags of the message, separated by commas
required: false
external_user_name:
type: str
description:
- (chat only - required) Name of the "user" sending the message
required: false
from_address:
type: str
description:
- (inbox only - required) Email address of the message sender
required: false
source:
type: str
description:
- (inbox only - required) Human readable identifier of the application that uses the Flowdock API
required: false
subject:
type: str
description:
- (inbox only - required) Subject line of the message
required: false
from_name:
type: str
description:
- (inbox only) Name of the message sender
required: false
reply_to:
type: str
description:
- (inbox only) Email address for replies
required: false
project:
type: str
description:
- (inbox only) Human readable identifier for more detailed message categorization
required: false
link:
type: str
description:
- (inbox only) Link associated with the message. This will be used to link the message subject in Team Inbox.
required: false
validate_certs:
description:
- If V(false), SSL certificates will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
required: false
default: true
type: bool
requirements: [ ]
'''
EXAMPLES = '''
- name: Send a message to a flowdock
community.general.flowdock:
type: inbox
token: AAAAAA
from_address: user@example.com
source: my cool app
msg: test from ansible
subject: test subject
- name: Send a message to a flowdock
community.general.flowdock:
type: chat
token: AAAAAA
external_user_name: testuser
msg: test from ansible
tags: tag1,tag2,tag3
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.urls import fetch_url
# ===========================================
# Module execution.
#
def main():
module = AnsibleModule(
argument_spec=dict(
token=dict(required=True, no_log=True),
msg=dict(required=True),
type=dict(required=True, choices=["inbox", "chat"]),
external_user_name=dict(required=False),
from_address=dict(required=False),
source=dict(required=False),
subject=dict(required=False),
from_name=dict(required=False),
reply_to=dict(required=False),
project=dict(required=False),
tags=dict(required=False),
link=dict(required=False),
validate_certs=dict(default=True, type='bool'),
),
supports_check_mode=True
)
type = module.params["type"]
token = module.params["token"]
if type == 'inbox':
url = "https://api.flowdock.com/v1/messages/team_inbox/%s" % (token)
else:
url = "https://api.flowdock.com/v1/messages/chat/%s" % (token)
params = {}
# required params
params['content'] = module.params["msg"]
# required params for the 'chat' type
if module.params['external_user_name']:
if type == 'inbox':
module.fail_json(msg="external_user_name is not valid for the 'inbox' type")
else:
params['external_user_name'] = module.params["external_user_name"]
elif type == 'chat':
module.fail_json(msg="external_user_name is required for the 'chat' type")
# required params for the 'inbox' type
for item in ['from_address', 'source', 'subject']:
if module.params[item]:
if type == 'chat':
module.fail_json(msg="%s is not valid for the 'chat' type" % item)
else:
params[item] = module.params[item]
elif type == 'inbox':
module.fail_json(msg="%s is required for the 'inbox' type" % item)
# optional params
if module.params["tags"]:
params['tags'] = module.params["tags"]
# optional params for the 'inbox' type
for item in ['from_name', 'reply_to', 'project', 'link']:
if module.params[item]:
if type == 'chat':
module.fail_json(msg="%s is not valid for the 'chat' type" % item)
else:
params[item] = module.params[item]
# If we're in check mode, just exit pretending like we succeeded
if module.check_mode:
module.exit_json(changed=False)
# Send the data to Flowdock
data = urlencode(params)
response, info = fetch_url(module, url, data=data)
if info['status'] != 200:
module.fail_json(msg="unable to send msg: %s" % info['msg'])
module.exit_json(changed=True, msg=module.params["msg"])
if __name__ == '__main__':
main()

View file

@@ -15,7 +15,7 @@ short_description: Management of instances in Proxmox VE cluster
description: description:
- Allows you to create/delete/stop instances in Proxmox VE cluster. - Allows you to create/delete/stop instances in Proxmox VE cluster.
- The module automatically detects containerization type (lxc for PVE 4, openvz for older). - The module automatically detects containerization type (lxc for PVE 4, openvz for older).
- Since community.general 4.0.0 on, there are no more default values, see O(proxmox_default_behavior). - Since community.general 4.0.0 on, there are no more default values.
attributes: attributes:
check_mode: check_mode:
support: none support: none
@@ -47,28 +47,23 @@ options:
comma-delimited list C([volume=]<volume> [,acl=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>] comma-delimited list C([volume=]<volume> [,acl=<1|0>] [,mountoptions=<opt[;opt...]>] [,quota=<1|0>]
[,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>])." [,replicate=<1|0>] [,ro=<1|0>] [,shared=<1|0>] [,size=<DiskSize>])."
- See U(https://pve.proxmox.com/wiki/Linux_Container) for a full description. - See U(https://pve.proxmox.com/wiki/Linux_Container) for a full description.
- This option has no default unless O(proxmox_default_behavior) is set to V(compatibility); then the default is V(3).
- Should not be used in conjunction with O(storage). - Should not be used in conjunction with O(storage).
type: str type: str
cores: cores:
description: description:
- Specify number of cores per socket. - Specify number of cores per socket.
- This option has no default unless O(proxmox_default_behavior) is set to V(compatibility); then the default is V(1).
type: int type: int
cpus: cpus:
description: description:
- numbers of allocated cpus for instance - numbers of allocated cpus for instance
- This option has no default unless O(proxmox_default_behavior) is set to V(compatibility); then the default is V(1).
type: int type: int
memory: memory:
description: description:
- memory size in MB for instance - memory size in MB for instance
- This option has no default unless O(proxmox_default_behavior) is set to V(compatibility); then the default is V(512).
type: int type: int
swap: swap:
description: description:
- swap memory size in MB for instance - swap memory size in MB for instance
- This option has no default unless O(proxmox_default_behavior) is set to V(compatibility); then the default is V(0).
type: int type: int
netif: netif:
description: description:
@@ -101,7 +96,6 @@ options:
onboot: onboot:
description: description:
- specifies whether a VM will be started during system bootup - specifies whether a VM will be started during system bootup
- This option has no default unless O(proxmox_default_behavior) is set to V(compatibility); then the default is V(false).
type: bool type: bool
storage: storage:
description: description:
@@ -120,7 +114,6 @@ options:
cpuunits: cpuunits:
description: description:
- CPU weight for a VM - CPU weight for a VM
- This option has no default unless O(proxmox_default_behavior) is set to V(compatibility); then the default is V(1000).
type: int type: int
nameserver: nameserver:
description: description:
@@ -200,25 +193,6 @@ options:
- The special value V(host) configures the same timezone used by Proxmox host. - The special value V(host) configures the same timezone used by Proxmox host.
type: str type: str
version_added: '7.1.0' version_added: '7.1.0'
proxmox_default_behavior:
description:
- As of community.general 4.0.0, various options no longer have default values.
These default values caused problems when users expected different behavior from Proxmox
by default or filled options which caused problems when set.
- The value V(compatibility) (default before community.general 4.0.0) will ensure that the default values
are used when the values are not explicitly specified by the user. The new default is V(no_defaults),
which makes sure these options have no defaults.
- This affects the O(disk), O(cores), O(cpus), O(memory), O(onboot), O(swap), and O(cpuunits) options.
- >
This parameter is now B(deprecated) and it will be removed in community.general 10.0.0.
By then, the module's behavior should be to not set default values, equivalent to V(no_defaults).
If a consistent set of defaults is needed, the playbook or role should be responsible for setting it.
type: str
default: no_defaults
choices:
- compatibility
- no_defaults
version_added: "1.3.0"
clone: clone:
description: description:
- ID of the container to be cloned. - ID of the container to be cloned.
@@ -785,8 +759,6 @@ def main():
description=dict(type='str'), description=dict(type='str'),
hookscript=dict(type='str'), hookscript=dict(type='str'),
timezone=dict(type='str'), timezone=dict(type='str'),
proxmox_default_behavior=dict(type='str', default='no_defaults', choices=['compatibility', 'no_defaults'],
removed_in_version='9.0.0', removed_from_collection='community.general'),
clone=dict(type='int'), clone=dict(type='int'),
clone_type=dict(default='opportunistic', choices=['full', 'linked', 'opportunistic']), clone_type=dict(default='opportunistic', choices=['full', 'linked', 'opportunistic']),
tags=dict(type='list', elements='str') tags=dict(type='list', elements='str')
@@ -827,20 +799,6 @@ def main():
timeout = module.params['timeout'] timeout = module.params['timeout']
clone = module.params['clone'] clone = module.params['clone']
if module.params['proxmox_default_behavior'] == 'compatibility':
old_default_values = dict(
disk="3",
cores=1,
cpus=1,
memory=512,
swap=0,
onboot=False,
cpuunits=1000,
)
for param, value in old_default_values.items():
if module.params[param] is None:
module.params[param] = value
# If vmid not set get the Next VM id from ProxmoxAPI # If vmid not set get the Next VM id from ProxmoxAPI
# If hostname is set get the VM id from ProxmoxAPI # If hostname is set get the VM id from ProxmoxAPI
if not vmid and state == 'present': if not vmid and state == 'present':
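With proxmox_default_behavior gone, the module no longer injects the old compatibility defaults shown in the removed block above; playbooks that relied on them need to set those values explicitly. A minimal sketch carrying the former defaults over — the connection parameters and the ostemplate value are placeholders, not taken from this diff:
- name: Create a container with the former compatibility defaults set explicitly
  community.general.proxmox:
    api_user: root@pam                      # placeholder credentials
    api_password: "{{ proxmox_password }}"  # placeholder
    api_host: proxmox-node1                 # placeholder
    vmid: 100
    hostname: example.org
    ostemplate: "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst"  # placeholder template
    disk: "3"
    cores: 1
    cpus: 1
    memory: 512
    swap: 0
    onboot: false
    cpuunits: 1000
    state: present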

View file

@@ -1,903 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax
short_description: Create / delete an instance in Rackspace Public Cloud
description:
- creates / deletes a Rackspace Public Cloud instance and optionally
waits for it to be 'running'.
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
auto_increment:
description:
- Whether or not to increment a single number with the name of the
created servers. Only applicable when used with the O(group) attribute
or meta key.
type: bool
default: true
boot_from_volume:
description:
- Whether or not to boot the instance from a Cloud Block Storage volume.
If V(true) and O(image) is specified a new volume will be created at
boot time. O(boot_volume_size) is required with O(image) to create a
new volume at boot time.
type: bool
default: false
boot_volume:
type: str
description:
- Cloud Block Storage ID or Name to use as the boot volume of the
instance
boot_volume_size:
type: int
description:
- Size of the volume to create in Gigabytes. This is only required with
O(image) and O(boot_from_volume).
default: 100
boot_volume_terminate:
description:
- Whether the O(boot_volume) or newly created volume from O(image) will
be terminated when the server is terminated
type: bool
default: false
config_drive:
description:
- Attach read-only configuration drive to server as label config-2
type: bool
default: false
count:
type: int
description:
- number of instances to launch
default: 1
count_offset:
type: int
description:
- number count to start at
default: 1
disk_config:
type: str
description:
- Disk partitioning strategy
- If not specified it will assume the value V(auto).
choices:
- auto
- manual
exact_count:
description:
- Explicitly ensure an exact count of instances, used with
state=active/present. If specified as V(true) and O(count) is less than
the servers matched, servers will be deleted to match the count. If
the number of matched servers is fewer than specified in O(count)
additional servers will be added.
type: bool
default: false
extra_client_args:
type: dict
default: {}
description:
- A hash of key/value pairs to be used when creating the cloudservers
client. This is considered an advanced option, use it wisely and
with caution.
extra_create_args:
type: dict
default: {}
description:
- A hash of key/value pairs to be used when creating a new server.
This is considered an advanced option, use it wisely and with caution.
files:
type: dict
default: {}
description:
- Files to insert into the instance. remotefilename:localcontent
flavor:
type: str
description:
- flavor to use for the instance
group:
type: str
description:
- host group to assign to server, is also used for idempotent operations
to ensure a specific number of instances
image:
type: str
description:
- image to use for the instance. Can be an C(id), C(human_id) or C(name).
With O(boot_from_volume), a Cloud Block Storage volume will be created
with this image
instance_ids:
type: list
elements: str
description:
- list of instance ids, currently only used when state='absent' to
remove instances
key_name:
type: str
description:
- key pair to use on the instance
aliases:
- keypair
meta:
type: dict
default: {}
description:
- A hash of metadata to associate with the instance
name:
type: str
description:
- Name to give the instance
networks:
type: list
elements: str
description:
- The network to attach to the instances. If specified, you must include
ALL networks including the public and private interfaces. Can be C(id)
or C(label).
default:
- public
- private
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
user_data:
type: str
description:
- Data to be uploaded to the servers config drive. This option implies
O(config_drive). Can be a file path or a string
wait:
description:
- wait for the instance to be in state 'running' before returning
type: bool
default: false
wait_timeout:
type: int
description:
- how long before wait gives up, in seconds
default: 300
author:
- "Jesse Keating (@omgjlk)"
- "Matt Martz (@sivel)"
notes:
- O(exact_count) can be "destructive" if the number of running servers in
the O(group) is larger than that specified in O(count). In such a case, the
O(state) is effectively set to V(absent) and the extra servers are deleted.
In the case of deletion, the returned data structure will have RV(ignore:action)
set to V(delete), and the oldest servers in the group will be deleted.
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Build a Cloud Server
gather_facts: false
tasks:
- name: Server build request
local_action:
module: rax
credentials: ~/.raxpub
name: rax-test1
flavor: 5
image: b11d9567-e412-4255-96b9-bd63ab23bcfe
key_name: my_rackspace_key
files:
/root/test.txt: /home/localuser/test.txt
wait: true
state: present
networks:
- private
- public
register: rax
- name: Build an exact count of cloud servers with incremented names
hosts: local
gather_facts: false
tasks:
- name: Server build requests
local_action:
module: rax
credentials: ~/.raxpub
name: test%03d.example.org
flavor: performance1-1
image: ubuntu-1204-lts-precise-pangolin
state: present
count: 10
count_offset: 10
exact_count: true
group: test
wait: true
register: rax
'''
import json
import os
import re
import time
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (FINAL_STATUSES, rax_argument_spec, rax_find_bootable_volume,
rax_find_image, rax_find_network, rax_find_volume,
rax_required_together, rax_to_dict, setup_rax_module)
from ansible.module_utils.six.moves import xrange
from ansible.module_utils.six import string_types
def rax_find_server_image(module, server, image, boot_volume):
if not image and boot_volume:
vol = rax_find_bootable_volume(module, pyrax, server,
exit=False)
if not vol:
return None
volume_image_metadata = vol.volume_image_metadata
vol_image_id = volume_image_metadata.get('image_id')
if vol_image_id:
server_image = rax_find_image(module, pyrax,
vol_image_id, exit=False)
if server_image:
server.image = dict(id=server_image)
# Match image IDs taking care of boot from volume
if image and not server.image:
vol = rax_find_bootable_volume(module, pyrax, server)
volume_image_metadata = vol.volume_image_metadata
vol_image_id = volume_image_metadata.get('image_id')
if not vol_image_id:
return None
server_image = rax_find_image(module, pyrax,
vol_image_id, exit=False)
if image != server_image:
return None
server.image = dict(id=server_image)
elif image and server.image['id'] != image:
return None
return server.image
def create(module, names=None, flavor=None, image=None, meta=None, key_name=None,
files=None, wait=True, wait_timeout=300, disk_config=None,
group=None, nics=None, extra_create_args=None, user_data=None,
config_drive=False, existing=None, block_device_mapping_v2=None):
names = [] if names is None else names
meta = {} if meta is None else meta
files = {} if files is None else files
nics = [] if nics is None else nics
extra_create_args = {} if extra_create_args is None else extra_create_args
existing = [] if existing is None else existing
block_device_mapping_v2 = [] if block_device_mapping_v2 is None else block_device_mapping_v2
cs = pyrax.cloudservers
changed = False
if user_data:
config_drive = True
if user_data and os.path.isfile(os.path.expanduser(user_data)):
try:
user_data = os.path.expanduser(user_data)
f = open(user_data)
user_data = f.read()
f.close()
except Exception as e:
module.fail_json(msg='Failed to load %s' % user_data)
# Handle the file contents
for rpath in files.keys():
lpath = os.path.expanduser(files[rpath])
try:
fileobj = open(lpath, 'r')
files[rpath] = fileobj.read()
fileobj.close()
except Exception as e:
module.fail_json(msg='Failed to load %s' % lpath)
try:
servers = []
bdmv2 = block_device_mapping_v2
for name in names:
servers.append(cs.servers.create(name=name, image=image,
flavor=flavor, meta=meta,
key_name=key_name,
files=files, nics=nics,
disk_config=disk_config,
config_drive=config_drive,
userdata=user_data,
block_device_mapping_v2=bdmv2,
**extra_create_args))
except Exception as e:
if e.message:
msg = str(e.message)
else:
msg = repr(e)
module.fail_json(msg=msg)
else:
changed = True
if wait:
end_time = time.time() + wait_timeout
infinite = wait_timeout == 0
while infinite or time.time() < end_time:
for server in servers:
try:
server.get()
except Exception:
server.status = 'ERROR'
if not filter(lambda s: s.status not in FINAL_STATUSES,
servers):
break
time.sleep(5)
success = []
error = []
timeout = []
for server in servers:
try:
server.get()
except Exception:
server.status = 'ERROR'
instance = rax_to_dict(server, 'server')
if server.status == 'ACTIVE' or not wait:
success.append(instance)
elif server.status == 'ERROR':
error.append(instance)
elif wait:
timeout.append(instance)
untouched = [rax_to_dict(s, 'server') for s in existing]
instances = success + untouched
results = {
'changed': changed,
'action': 'create',
'instances': instances,
'success': success,
'error': error,
'timeout': timeout,
'instance_ids': {
'instances': [i['id'] for i in instances],
'success': [i['id'] for i in success],
'error': [i['id'] for i in error],
'timeout': [i['id'] for i in timeout]
}
}
if timeout:
results['msg'] = 'Timeout waiting for all servers to build'
elif error:
results['msg'] = 'Failed to build all servers'
if 'msg' in results:
module.fail_json(**results)
else:
module.exit_json(**results)
def delete(module, instance_ids=None, wait=True, wait_timeout=300, kept=None):
instance_ids = [] if instance_ids is None else instance_ids
kept = [] if kept is None else kept
cs = pyrax.cloudservers
changed = False
instances = {}
servers = []
for instance_id in instance_ids:
servers.append(cs.servers.get(instance_id))
for server in servers:
try:
server.delete()
except Exception as e:
module.fail_json(msg=e.message)
else:
changed = True
instance = rax_to_dict(server, 'server')
instances[instance['id']] = instance
# If requested, wait for server deletion
if wait:
end_time = time.time() + wait_timeout
infinite = wait_timeout == 0
while infinite or time.time() < end_time:
for server in servers:
instance_id = server.id
try:
server.get()
except Exception:
instances[instance_id]['status'] = 'DELETED'
instances[instance_id]['rax_status'] = 'DELETED'
if not filter(lambda s: s['status'] not in ('', 'DELETED',
'ERROR'),
instances.values()):
break
time.sleep(5)
timeout = filter(lambda s: s['status'] not in ('', 'DELETED', 'ERROR'),
instances.values())
error = filter(lambda s: s['status'] in ('ERROR'),
instances.values())
success = filter(lambda s: s['status'] in ('', 'DELETED'),
instances.values())
instances = [rax_to_dict(s, 'server') for s in kept]
results = {
'changed': changed,
'action': 'delete',
'instances': instances,
'success': success,
'error': error,
'timeout': timeout,
'instance_ids': {
'instances': [i['id'] for i in instances],
'success': [i['id'] for i in success],
'error': [i['id'] for i in error],
'timeout': [i['id'] for i in timeout]
}
}
if timeout:
results['msg'] = 'Timeout waiting for all servers to delete'
elif error:
results['msg'] = 'Failed to delete all servers'
if 'msg' in results:
module.fail_json(**results)
else:
module.exit_json(**results)
def cloudservers(module, state=None, name=None, flavor=None, image=None,
meta=None, key_name=None, files=None, wait=True, wait_timeout=300,
disk_config=None, count=1, group=None, instance_ids=None,
exact_count=False, networks=None, count_offset=0,
auto_increment=False, extra_create_args=None, user_data=None,
config_drive=False, boot_from_volume=False,
boot_volume=None, boot_volume_size=None,
boot_volume_terminate=False):
meta = {} if meta is None else meta
files = {} if files is None else files
instance_ids = [] if instance_ids is None else instance_ids
networks = [] if networks is None else networks
extra_create_args = {} if extra_create_args is None else extra_create_args
cs = pyrax.cloudservers
cnw = pyrax.cloud_networks
if not cnw:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
if state == 'present' or (state == 'absent' and instance_ids is None):
if not boot_from_volume and not boot_volume and not image:
module.fail_json(msg='image is required for the "rax" module')
for arg, value in dict(name=name, flavor=flavor).items():
if not value:
module.fail_json(msg='%s is required for the "rax" module' %
arg)
if boot_from_volume and not image and not boot_volume:
module.fail_json(msg='image or boot_volume are required for the '
'"rax" with boot_from_volume')
if boot_from_volume and image and not boot_volume_size:
module.fail_json(msg='boot_volume_size is required for the "rax" '
'module with boot_from_volume and image')
if boot_from_volume and image and boot_volume:
image = None
servers = []
# Add the group meta key
if group and 'group' not in meta:
meta['group'] = group
elif 'group' in meta and group is None:
group = meta['group']
# Normalize and ensure all metadata values are strings
for k, v in meta.items():
if isinstance(v, list):
meta[k] = ','.join(['%s' % i for i in v])
elif isinstance(v, dict):
meta[k] = json.dumps(v)
elif not isinstance(v, string_types):
meta[k] = '%s' % v
# When using state=absent with group, the absent block won't match the
# names properly. Use the exact_count functionality to decrease the count
# to the desired level
was_absent = False
if group is not None and state == 'absent':
exact_count = True
state = 'present'
was_absent = True
if image:
image = rax_find_image(module, pyrax, image)
nics = []
if networks:
for network in networks:
nics.extend(rax_find_network(module, pyrax, network))
# act on the state
if state == 'present':
# Idempotently ensure a specific count of servers
if exact_count is not False:
# See if we can find servers that match our options
if group is None:
module.fail_json(msg='"group" must be provided when using '
'"exact_count"')
if auto_increment:
numbers = set()
# See if the name is a printf like string, if not append
# %d to the end
try:
name % 0
except TypeError as e:
if e.message.startswith('not all'):
name = '%s%%d' % name
else:
module.fail_json(msg=e.message)
# regex pattern to match printf formatting
pattern = re.sub(r'%\d*[sd]', r'(\d+)', name)
for server in cs.servers.list():
# Ignore DELETED servers
if server.status == 'DELETED':
continue
if server.metadata.get('group') == group:
servers.append(server)
match = re.search(pattern, server.name)
if match:
number = int(match.group(1))
numbers.add(number)
number_range = xrange(count_offset, count_offset + count)
available_numbers = list(set(number_range)
.difference(numbers))
else: # Not auto incrementing
for server in cs.servers.list():
# Ignore DELETED servers
if server.status == 'DELETED':
continue
if server.metadata.get('group') == group:
servers.append(server)
# available_numbers not needed here, we inspect auto_increment
# again later
# If state was absent but the count was changed,
# assume we only wanted to remove that number of instances
if was_absent:
diff = len(servers) - count
if diff < 0:
count = 0
else:
count = diff
if len(servers) > count:
# We have more servers than we need, set state='absent'
# and delete the extras, this should delete the oldest
state = 'absent'
kept = servers[:count]
del servers[:count]
instance_ids = []
for server in servers:
instance_ids.append(server.id)
delete(module, instance_ids=instance_ids, wait=wait,
wait_timeout=wait_timeout, kept=kept)
elif len(servers) < count:
# we have fewer servers than we need
if auto_increment:
# auto incrementing server numbers
names = []
name_slice = count - len(servers)
numbers_to_use = available_numbers[:name_slice]
for number in numbers_to_use:
names.append(name % number)
else:
# We are not auto incrementing server numbers,
# create a list of 'name' that matches how many we need
names = [name] * (count - len(servers))
else:
# we have the right number of servers, just return info
# about all of the matched servers
instances = []
instance_ids = []
for server in servers:
instances.append(rax_to_dict(server, 'server'))
instance_ids.append(server.id)
module.exit_json(changed=False, action=None,
instances=instances,
success=[], error=[], timeout=[],
instance_ids={'instances': instance_ids,
'success': [], 'error': [],
'timeout': []})
else: # not called with exact_count=True
if group is not None:
if auto_increment:
# we are auto incrementing server numbers, but not with
# exact_count
numbers = set()
# See if the name is a printf like string, if not append
# %d to the end
try:
name % 0
except TypeError as e:
if e.message.startswith('not all'):
name = '%s%%d' % name
else:
module.fail_json(msg=e.message)
# regex pattern to match printf formatting
pattern = re.sub(r'%\d*[sd]', r'(\d+)', name)
for server in cs.servers.list():
# Ignore DELETED servers
if server.status == 'DELETED':
continue
if server.metadata.get('group') == group:
servers.append(server)
match = re.search(pattern, server.name)
if match:
number = int(match.group(1))
numbers.add(number)
number_range = xrange(count_offset,
count_offset + count + len(numbers))
available_numbers = list(set(number_range)
.difference(numbers))
names = []
numbers_to_use = available_numbers[:count]
for number in numbers_to_use:
names.append(name % number)
else:
# Not auto incrementing
names = [name] * count
else:
# No group was specified, and not using exact_count
# Perform more simplistic matching
search_opts = {
'name': '^%s$' % name,
'flavor': flavor
}
servers = []
for server in cs.servers.list(search_opts=search_opts):
# Ignore DELETED servers
if server.status == 'DELETED':
continue
if not rax_find_server_image(module, server, image,
boot_volume):
continue
# Ignore servers with non matching metadata
if server.metadata != meta:
continue
servers.append(server)
if len(servers) >= count:
# We have more servers than were requested, don't do
# anything. Not running with exact_count=True, so we assume
# more is OK
instances = []
for server in servers:
instances.append(rax_to_dict(server, 'server'))
instance_ids = [i['id'] for i in instances]
module.exit_json(changed=False, action=None,
instances=instances, success=[], error=[],
timeout=[],
instance_ids={'instances': instance_ids,
'success': [], 'error': [],
'timeout': []})
# We need more servers to reach out target, create names for
# them, we aren't performing auto_increment here
names = [name] * (count - len(servers))
block_device_mapping_v2 = []
if boot_from_volume:
mapping = {
'boot_index': '0',
'delete_on_termination': boot_volume_terminate,
'destination_type': 'volume',
}
if image:
mapping.update({
'uuid': image,
'source_type': 'image',
'volume_size': boot_volume_size,
})
image = None
elif boot_volume:
volume = rax_find_volume(module, pyrax, boot_volume)
mapping.update({
'uuid': pyrax.utils.get_id(volume),
'source_type': 'volume',
})
block_device_mapping_v2.append(mapping)
create(module, names=names, flavor=flavor, image=image,
meta=meta, key_name=key_name, files=files, wait=wait,
wait_timeout=wait_timeout, disk_config=disk_config, group=group,
nics=nics, extra_create_args=extra_create_args,
user_data=user_data, config_drive=config_drive,
existing=servers,
block_device_mapping_v2=block_device_mapping_v2)
elif state == 'absent':
if instance_ids is None:
# We weren't given an explicit list of server IDs to delete
# Let's match instead
search_opts = {
'name': '^%s$' % name,
'flavor': flavor
}
for server in cs.servers.list(search_opts=search_opts):
# Ignore DELETED servers
if server.status == 'DELETED':
continue
if not rax_find_server_image(module, server, image,
boot_volume):
continue
# Ignore servers with non matching metadata
if meta != server.metadata:
continue
servers.append(server)
# Build a list of server IDs to delete
instance_ids = []
for server in servers:
if len(instance_ids) < count:
instance_ids.append(server.id)
else:
break
if not instance_ids:
# No server IDs were matched for deletion, or no IDs were
# explicitly provided, just exit and don't do anything
module.exit_json(changed=False, action=None, instances=[],
success=[], error=[], timeout=[],
instance_ids={'instances': [],
'success': [], 'error': [],
'timeout': []})
delete(module, instance_ids=instance_ids, wait=wait,
wait_timeout=wait_timeout)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
auto_increment=dict(default=True, type='bool'),
boot_from_volume=dict(default=False, type='bool'),
boot_volume=dict(type='str'),
boot_volume_size=dict(type='int', default=100),
boot_volume_terminate=dict(type='bool', default=False),
config_drive=dict(default=False, type='bool'),
count=dict(default=1, type='int'),
count_offset=dict(default=1, type='int'),
disk_config=dict(choices=['auto', 'manual']),
exact_count=dict(default=False, type='bool'),
extra_client_args=dict(type='dict', default={}),
extra_create_args=dict(type='dict', default={}),
files=dict(type='dict', default={}),
flavor=dict(),
group=dict(),
image=dict(),
instance_ids=dict(type='list', elements='str'),
key_name=dict(aliases=['keypair']),
meta=dict(type='dict', default={}),
name=dict(),
networks=dict(type='list', elements='str', default=['public', 'private']),
state=dict(default='present', choices=['present', 'absent']),
user_data=dict(no_log=True),
wait=dict(default=False, type='bool'),
wait_timeout=dict(default=300, type='int'),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
auto_increment = module.params.get('auto_increment')
boot_from_volume = module.params.get('boot_from_volume')
boot_volume = module.params.get('boot_volume')
boot_volume_size = module.params.get('boot_volume_size')
boot_volume_terminate = module.params.get('boot_volume_terminate')
config_drive = module.params.get('config_drive')
count = module.params.get('count')
count_offset = module.params.get('count_offset')
disk_config = module.params.get('disk_config')
if disk_config:
disk_config = disk_config.upper()
exact_count = module.params.get('exact_count', False)
extra_client_args = module.params.get('extra_client_args')
extra_create_args = module.params.get('extra_create_args')
files = module.params.get('files')
flavor = module.params.get('flavor')
group = module.params.get('group')
image = module.params.get('image')
instance_ids = module.params.get('instance_ids')
key_name = module.params.get('key_name')
meta = module.params.get('meta')
name = module.params.get('name')
networks = module.params.get('networks')
state = module.params.get('state')
user_data = module.params.get('user_data')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
setup_rax_module(module, pyrax)
if extra_client_args:
pyrax.cloudservers = pyrax.connect_to_cloudservers(
region=pyrax.cloudservers.client.region_name,
**extra_client_args)
client = pyrax.cloudservers.client
if 'bypass_url' in extra_client_args:
client.management_url = extra_client_args['bypass_url']
if pyrax.cloudservers is None:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
cloudservers(module, state=state, name=name, flavor=flavor,
image=image, meta=meta, key_name=key_name, files=files,
wait=wait, wait_timeout=wait_timeout, disk_config=disk_config,
count=count, group=group, instance_ids=instance_ids,
exact_count=exact_count, networks=networks,
count_offset=count_offset, auto_increment=auto_increment,
extra_create_args=extra_create_args, user_data=user_data,
config_drive=config_drive, boot_from_volume=boot_from_volume,
boot_volume=boot_volume, boot_volume_size=boot_volume_size,
boot_volume_terminate=boot_volume_terminate)
if __name__ == '__main__':
main()

View file

@@ -1,235 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_cbs
short_description: Manipulate Rackspace Cloud Block Storage Volumes
description:
- Manipulate Rackspace Cloud Block Storage Volumes
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
description:
type: str
description:
- Description to give the volume being created.
image:
type: str
description:
- Image to use for bootable volumes. Can be an C(id), C(human_id) or
C(name). This option requires C(pyrax>=1.9.3).
meta:
type: dict
default: {}
description:
- A hash of metadata to associate with the volume.
name:
type: str
description:
- Name to give the volume being created.
required: true
size:
type: int
description:
- Size of the volume to create in Gigabytes.
default: 100
snapshot_id:
type: str
description:
- The id of the snapshot to create the volume from.
state:
type: str
description:
- Indicate desired state of the resource.
choices:
- present
- absent
default: present
volume_type:
type: str
description:
- Type of the volume being created.
choices:
- SATA
- SSD
default: SATA
wait:
description:
- Wait for the volume to be in state C(available) before returning.
type: bool
default: false
wait_timeout:
type: int
description:
- how long before wait gives up, in seconds.
default: 300
author:
- "Christopher H. Laco (@claco)"
- "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Build a Block Storage Volume
gather_facts: false
hosts: local
connection: local
tasks:
- name: Storage volume create request
local_action:
module: rax_cbs
credentials: ~/.raxpub
name: my-volume
description: My Volume
volume_type: SSD
size: 150
region: DFW
wait: true
state: present
meta:
app: my-cool-app
register: my_volume
'''
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (VOLUME_STATUS, rax_argument_spec, rax_find_image, rax_find_volume,
rax_required_together, rax_to_dict, setup_rax_module)
def cloud_block_storage(module, state, name, description, meta, size,
snapshot_id, volume_type, wait, wait_timeout,
image):
changed = False
volume = None
instance = {}
cbs = pyrax.cloud_blockstorage
if cbs is None:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
if image:
# pyrax<1.9.3 did not have support for specifying an image when
# creating a volume which is required for bootable volumes
if LooseVersion(pyrax.version.version) < LooseVersion('1.9.3'):
module.fail_json(msg='Creating a bootable volume requires '
'pyrax>=1.9.3')
image = rax_find_image(module, pyrax, image)
volume = rax_find_volume(module, pyrax, name)
if state == 'present':
if not volume:
kwargs = dict()
if image:
kwargs['image'] = image
try:
volume = cbs.create(name, size=size, volume_type=volume_type,
description=description,
metadata=meta,
snapshot_id=snapshot_id, **kwargs)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
if wait:
attempts = wait_timeout // 5
pyrax.utils.wait_for_build(volume, interval=5,
attempts=attempts)
volume.get()
instance = rax_to_dict(volume)
result = dict(changed=changed, volume=instance)
if volume.status == 'error':
result['msg'] = '%s failed to build' % volume.id
elif wait and volume.status not in VOLUME_STATUS:
result['msg'] = 'Timeout waiting on %s' % volume.id
if 'msg' in result:
module.fail_json(**result)
else:
module.exit_json(**result)
elif state == 'absent':
if volume:
instance = rax_to_dict(volume)
try:
volume.delete()
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
module.exit_json(changed=changed, volume=instance)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
description=dict(type='str'),
image=dict(type='str'),
meta=dict(type='dict', default={}),
name=dict(required=True),
size=dict(type='int', default=100),
snapshot_id=dict(),
state=dict(default='present', choices=['present', 'absent']),
volume_type=dict(choices=['SSD', 'SATA'], default='SATA'),
wait=dict(type='bool', default=False),
wait_timeout=dict(type='int', default=300)
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
description = module.params.get('description')
image = module.params.get('image')
meta = module.params.get('meta')
name = module.params.get('name')
size = module.params.get('size')
snapshot_id = module.params.get('snapshot_id')
state = module.params.get('state')
volume_type = module.params.get('volume_type')
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
setup_rax_module(module, pyrax)
cloud_block_storage(module, state, name, description, meta, size,
snapshot_id, volume_type, wait, wait_timeout,
image)
if __name__ == '__main__':
main()

View file

@@ -1,226 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_cbs_attachments
short_description: Manipulate Rackspace Cloud Block Storage Volume Attachments
description:
- Manipulate Rackspace Cloud Block Storage Volume Attachments
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
device:
type: str
description:
- The device path to attach the volume to, e.g. /dev/xvde.
- Before 2.4 this was a required field. Now it can be left to null to auto assign the device name.
volume:
type: str
description:
- Name or id of the volume to attach/detach
required: true
server:
type: str
description:
- Name or id of the server to attach/detach
required: true
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
wait:
description:
- wait for the volume to be in 'in-use'/'available' state before returning
type: bool
default: false
wait_timeout:
type: int
description:
- how long before wait gives up, in seconds
default: 300
author:
- "Christopher H. Laco (@claco)"
- "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Attach a Block Storage Volume
gather_facts: false
hosts: local
connection: local
tasks:
- name: Storage volume attach request
local_action:
module: rax_cbs_attachments
credentials: ~/.raxpub
volume: my-volume
server: my-server
device: /dev/xvdd
region: DFW
wait: true
state: present
register: my_volume
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (NON_CALLABLES,
rax_argument_spec,
rax_find_server,
rax_find_volume,
rax_required_together,
rax_to_dict,
setup_rax_module,
)
def cloud_block_storage_attachments(module, state, volume, server, device,
wait, wait_timeout):
cbs = pyrax.cloud_blockstorage
cs = pyrax.cloudservers
if cbs is None or cs is None:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
changed = False
instance = {}
volume = rax_find_volume(module, pyrax, volume)
if not volume:
module.fail_json(msg='No matching storage volumes were found')
if state == 'present':
server = rax_find_server(module, pyrax, server)
if (volume.attachments and
volume.attachments[0]['server_id'] == server.id):
changed = False
elif volume.attachments:
module.fail_json(msg='Volume is attached to another server')
else:
try:
volume.attach_to_instance(server, mountpoint=device)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
volume.get()
for key, value in vars(volume).items():
if (isinstance(value, NON_CALLABLES) and
not key.startswith('_')):
instance[key] = value
result = dict(changed=changed)
if volume.status == 'error':
result['msg'] = '%s failed to build' % volume.id
elif wait:
attempts = wait_timeout // 5
pyrax.utils.wait_until(volume, 'status', 'in-use',
interval=5, attempts=attempts)
volume.get()
result['volume'] = rax_to_dict(volume)
if 'msg' in result:
module.fail_json(**result)
else:
module.exit_json(**result)
elif state == 'absent':
server = rax_find_server(module, pyrax, server)
if (volume.attachments and
volume.attachments[0]['server_id'] == server.id):
try:
volume.detach()
if wait:
pyrax.utils.wait_until(volume, 'status', 'available',
interval=3, attempts=0,
verbose=False)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
volume.get()
changed = True
elif volume.attachments:
module.fail_json(msg='Volume is attached to another server')
result = dict(changed=changed, volume=rax_to_dict(volume))
if volume.status == 'error':
result['msg'] = '%s failed to build' % volume.id
if 'msg' in result:
module.fail_json(**result)
else:
module.exit_json(**result)
module.exit_json(changed=changed, volume=instance)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
device=dict(required=False),
volume=dict(required=True),
server=dict(required=True),
state=dict(default='present', choices=['present', 'absent']),
wait=dict(type='bool', default=False),
wait_timeout=dict(type='int', default=300)
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
device = module.params.get('device')
volume = module.params.get('volume')
server = module.params.get('server')
state = module.params.get('state')
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
setup_rax_module(module, pyrax)
cloud_block_storage_attachments(module, state, volume, server, device,
wait, wait_timeout)
if __name__ == '__main__':
main()

View file

@@ -1,266 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_cdb
short_description: Create/delete or resize a Rackspace Cloud Databases instance
description:
- creates / deletes or resize a Rackspace Cloud Databases instance
and optionally waits for it to be 'running'. The name option needs to be
unique since it's used to identify the instance.
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
name:
type: str
description:
- Name of the databases server instance
required: true
flavor:
type: int
description:
- flavor to use for the instance 1 to 6 (i.e. 512MB to 16GB)
default: 1
volume:
type: int
description:
- Volume size of the database 1-150GB
default: 2
cdb_type:
type: str
description:
- type of instance (i.e. MySQL, MariaDB, Percona)
default: MySQL
aliases: ['type']
cdb_version:
type: str
description:
- version of database (MySQL supports 5.1 and 5.6, MariaDB supports 10, Percona supports 5.6)
- "The available choices are: V(5.1), V(5.6) and V(10)."
default: '5.6'
aliases: ['version']
state:
type: str
description:
- Indicate desired state of the resource
choices: ['present', 'absent']
default: present
wait:
description:
- wait for the instance to be in state 'running' before returning
type: bool
default: false
wait_timeout:
type: int
description:
- how long before wait gives up, in seconds
default: 300
author: "Simon JAILLET (@jails)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Build a Cloud Databases
gather_facts: false
tasks:
- name: Server build request
local_action:
module: rax_cdb
credentials: ~/.raxpub
region: IAD
name: db-server1
flavor: 1
volume: 2
cdb_type: MySQL
cdb_version: 5.6
wait: true
state: present
register: rax_db_server
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, rax_to_dict, setup_rax_module
def find_instance(name):
cdb = pyrax.cloud_databases
instances = cdb.list()
if instances:
for instance in instances:
if instance.name == name:
return instance
return False
def save_instance(module, name, flavor, volume, cdb_type, cdb_version, wait,
wait_timeout):
for arg, value in dict(name=name, flavor=flavor,
volume=volume, type=cdb_type, version=cdb_version
).items():
if not value:
module.fail_json(msg='%s is required for the "rax_cdb"'
' module' % arg)
if not (volume >= 1 and volume <= 150):
module.fail_json(msg='volume is required to be between 1 and 150')
cdb = pyrax.cloud_databases
flavors = []
for item in cdb.list_flavors():
flavors.append(item.id)
if not (flavor in flavors):
module.fail_json(msg='nonexistent flavor reference "%s"' % str(flavor))
changed = False
instance = find_instance(name)
if not instance:
action = 'create'
try:
instance = cdb.create(name=name, flavor=flavor, volume=volume,
type=cdb_type, version=cdb_version)
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
changed = True
else:
action = None
if instance.volume.size != volume:
action = 'resize'
if instance.volume.size > volume:
module.fail_json(changed=False, action=action,
msg='The new volume size must be larger than '
'the current volume size',
cdb=rax_to_dict(instance))
instance.resize_volume(volume)
changed = True
if int(instance.flavor.id) != flavor:
action = 'resize'
pyrax.utils.wait_until(instance, 'status', 'ACTIVE',
attempts=wait_timeout)
instance.resize(flavor)
changed = True
if wait:
pyrax.utils.wait_until(instance, 'status', 'ACTIVE',
attempts=wait_timeout)
if wait and instance.status != 'ACTIVE':
module.fail_json(changed=changed, action=action,
cdb=rax_to_dict(instance),
msg='Timeout waiting for "%s" databases instance to '
'be created' % name)
module.exit_json(changed=changed, action=action, cdb=rax_to_dict(instance))
def delete_instance(module, name, wait, wait_timeout):
if not name:
module.fail_json(msg='name is required for the "rax_cdb" module')
changed = False
instance = find_instance(name)
if not instance:
module.exit_json(changed=False, action='delete')
try:
instance.delete()
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
changed = True
if wait:
pyrax.utils.wait_until(instance, 'status', 'SHUTDOWN',
attempts=wait_timeout)
if wait and instance.status != 'SHUTDOWN':
module.fail_json(changed=changed, action='delete',
cdb=rax_to_dict(instance),
msg='Timeout waiting for "%s" databases instance to '
'be deleted' % name)
module.exit_json(changed=changed, action='delete',
cdb=rax_to_dict(instance))
def rax_cdb(module, state, name, flavor, volume, cdb_type, cdb_version, wait,
wait_timeout):
# act on the state
if state == 'present':
save_instance(module, name, flavor, volume, cdb_type, cdb_version, wait,
wait_timeout)
elif state == 'absent':
delete_instance(module, name, wait, wait_timeout)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
name=dict(type='str', required=True),
flavor=dict(type='int', default=1),
volume=dict(type='int', default=2),
cdb_type=dict(type='str', default='MySQL', aliases=['type']),
cdb_version=dict(type='str', default='5.6', aliases=['version']),
state=dict(default='present', choices=['present', 'absent']),
wait=dict(type='bool', default=False),
wait_timeout=dict(type='int', default=300),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
name = module.params.get('name')
flavor = module.params.get('flavor')
volume = module.params.get('volume')
cdb_type = module.params.get('cdb_type')
cdb_version = module.params.get('cdb_version')
state = module.params.get('state')
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
setup_rax_module(module, pyrax)
rax_cdb(module, state, name, flavor, volume, cdb_type, cdb_version, wait, wait_timeout)
if __name__ == '__main__':
main()

View file

@@ -1,179 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: rax_cdb_database
short_description: Create / delete a database in the Cloud Databases
description:
- create / delete a database in the Cloud Databases.
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
cdb_id:
type: str
description:
- The databases server UUID
required: true
name:
type: str
description:
- Name to give to the database
required: true
character_set:
type: str
description:
- Set of symbols and encodings
default: 'utf8'
collate:
type: str
description:
- Set of rules for comparing characters in a character set
default: 'utf8_general_ci'
state:
type: str
description:
- Indicate desired state of the resource
choices: ['present', 'absent']
default: present
author: "Simon JAILLET (@jails)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Build a database in Cloud Databases
tasks:
- name: Database build request
local_action:
module: rax_cdb_database
credentials: ~/.raxpub
region: IAD
cdb_id: 323e7ce0-9cb0-11e3-a5e2-0800200c9a66
name: db1
state: present
register: rax_db_database
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, rax_to_dict, setup_rax_module
def find_database(instance, name):
try:
database = instance.get_database(name)
except Exception:
return False
return database
def save_database(module, cdb_id, name, character_set, collate):
cdb = pyrax.cloud_databases
try:
instance = cdb.get(cdb_id)
except Exception as e:
module.fail_json(msg='%s' % e.message)
changed = False
database = find_database(instance, name)
if not database:
try:
database = instance.create_database(name=name,
character_set=character_set,
collate=collate)
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
changed = True
module.exit_json(changed=changed, action='create',
database=rax_to_dict(database))
def delete_database(module, cdb_id, name):
cdb = pyrax.cloud_databases
try:
instance = cdb.get(cdb_id)
except Exception as e:
module.fail_json(msg='%s' % e.message)
changed = False
database = find_database(instance, name)
if database:
try:
database.delete()
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
changed = True
module.exit_json(changed=changed, action='delete',
database=rax_to_dict(database))
def rax_cdb_database(module, state, cdb_id, name, character_set, collate):
# act on the state
if state == 'present':
save_database(module, cdb_id, name, character_set, collate)
elif state == 'absent':
delete_database(module, cdb_id, name)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
cdb_id=dict(type='str', required=True),
name=dict(type='str', required=True),
character_set=dict(type='str', default='utf8'),
collate=dict(type='str', default='utf8_general_ci'),
state=dict(default='present', choices=['present', 'absent'])
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
cdb_id = module.params.get('cdb_id')
name = module.params.get('name')
character_set = module.params.get('character_set')
collate = module.params.get('collate')
state = module.params.get('state')
setup_rax_module(module, pyrax)
rax_cdb_database(module, state, cdb_id, name, character_set, collate)
if __name__ == '__main__':
main()

plugins/modules/rax_cdb_user.py
@@ -1,227 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_cdb_user
short_description: Create / delete a Rackspace Cloud Database user
description:
- create / delete a database in the Cloud Databases.
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
cdb_id:
type: str
description:
- The databases server UUID
required: true
db_username:
type: str
description:
- Name of the database user
required: true
db_password:
type: str
description:
- Database user password
required: true
databases:
type: list
elements: str
description:
- Name of the databases that the user can access
default: []
host:
type: str
description:
- Specifies the host from which a user is allowed to connect to
the database. Possible values are a string containing an IPv4 address
or "%" to allow connecting from any host
default: '%'
state:
type: str
description:
- Indicate desired state of the resource
choices: ['present', 'absent']
default: present
author: "Simon JAILLET (@jails)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Build a user in Cloud Databases
tasks:
- name: User build request
local_action:
module: rax_cdb_user
credentials: ~/.raxpub
region: IAD
cdb_id: 323e7ce0-9cb0-11e3-a5e2-0800200c9a66
db_username: user1
db_password: user1
databases: ['db1']
state: present
register: rax_db_user
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_text
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, rax_to_dict, setup_rax_module
def find_user(instance, name):
try:
user = instance.get_user(name)
except Exception:
return False
return user
def save_user(module, cdb_id, name, password, databases, host):
for arg, value in dict(cdb_id=cdb_id, name=name).items():
if not value:
module.fail_json(msg='%s is required for the "rax_cdb_user" '
'module' % arg)
cdb = pyrax.cloud_databases
try:
instance = cdb.get(cdb_id)
except Exception as e:
module.fail_json(msg='%s' % e.message)
changed = False
user = find_user(instance, name)
if not user:
action = 'create'
try:
user = instance.create_user(name=name,
password=password,
database_names=databases,
host=host)
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
changed = True
else:
action = 'update'
if user.host != host:
changed = True
user.update(password=password, host=host)
former_dbs = set([item.name for item in user.list_user_access()])
databases = set(databases)
if databases != former_dbs:
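            # reconcile access: revoke databases removed from the list, then grant the newly added ones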
try:
revoke_dbs = [db for db in former_dbs if db not in databases]
user.revoke_user_access(db_names=revoke_dbs)
new_dbs = [db for db in databases if db not in former_dbs]
user.grant_user_access(db_names=new_dbs)
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
changed = True
module.exit_json(changed=changed, action=action, user=rax_to_dict(user))
def delete_user(module, cdb_id, name):
for arg, value in dict(cdb_id=cdb_id, name=name).items():
if not value:
module.fail_json(msg='%s is required for the "rax_cdb_user"'
' module' % arg)
cdb = pyrax.cloud_databases
try:
instance = cdb.get(cdb_id)
except Exception as e:
module.fail_json(msg='%s' % e.message)
changed = False
user = find_user(instance, name)
if user:
try:
user.delete()
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
changed = True
module.exit_json(changed=changed, action='delete')
def rax_cdb_user(module, state, cdb_id, name, password, databases, host):
# act on the state
if state == 'present':
save_user(module, cdb_id, name, password, databases, host)
elif state == 'absent':
delete_user(module, cdb_id, name)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
cdb_id=dict(type='str', required=True),
db_username=dict(type='str', required=True),
db_password=dict(type='str', required=True, no_log=True),
databases=dict(type='list', elements='str', default=[]),
host=dict(type='str', default='%'),
state=dict(default='present', choices=['present', 'absent'])
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
cdb_id = module.params.get('cdb_id')
name = module.params.get('db_username')
password = module.params.get('db_password')
databases = module.params.get('databases')
host = to_text(module.params.get('host'), errors='surrogate_or_strict')
state = module.params.get('state')
setup_rax_module(module, pyrax)
rax_cdb_user(module, state, cdb_id, name, password, databases, host)
if __name__ == '__main__':
main()

plugins/modules/rax_clb.py
@@ -1,320 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_clb
short_description: Create / delete a load balancer in Rackspace Public Cloud
description:
- creates / deletes a Rackspace Public Cloud load balancer.
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
algorithm:
type: str
description:
- algorithm for the balancer being created
choices:
- RANDOM
- LEAST_CONNECTIONS
- ROUND_ROBIN
- WEIGHTED_LEAST_CONNECTIONS
- WEIGHTED_ROUND_ROBIN
default: LEAST_CONNECTIONS
meta:
type: dict
default: {}
description:
- A hash of metadata to associate with the instance
name:
type: str
description:
- Name to give the load balancer
required: true
port:
type: int
description:
- Port for the balancer being created
default: 80
protocol:
type: str
description:
- Protocol for the balancer being created
choices:
- DNS_TCP
- DNS_UDP
- FTP
- HTTP
- HTTPS
- IMAPS
- IMAPv4
- LDAP
- LDAPS
- MYSQL
- POP3
- POP3S
- SMTP
- TCP
- TCP_CLIENT_FIRST
- UDP
- UDP_STREAM
- SFTP
default: HTTP
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
timeout:
type: int
description:
- timeout for communication between the balancer and the node
default: 30
type:
type: str
description:
- type of interface for the balancer being created
choices:
- PUBLIC
- SERVICENET
default: PUBLIC
vip_id:
type: str
description:
- Virtual IP ID to use when creating the load balancer for purposes of
sharing an IP with another load balancer of another protocol
wait:
description:
- wait for the balancer to be in state 'running' before returning
type: bool
default: false
wait_timeout:
type: int
description:
- how long before wait gives up, in seconds
default: 300
author:
- "Christopher H. Laco (@claco)"
- "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Build a Load Balancer
gather_facts: false
hosts: local
connection: local
tasks:
- name: Load Balancer create request
local_action:
module: rax_clb
credentials: ~/.raxpub
name: my-lb
port: 8080
protocol: HTTP
type: SERVICENET
timeout: 30
region: DFW
wait: true
state: present
meta:
app: my-cool-app
register: my_lb
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (CLB_ALGORITHMS,
CLB_PROTOCOLS,
rax_argument_spec,
rax_required_together,
rax_to_dict,
setup_rax_module,
)
def cloud_load_balancer(module, state, name, meta, algorithm, port, protocol,
vip_type, timeout, wait, wait_timeout, vip_id):
if int(timeout) < 30:
module.fail_json(msg='"timeout" must be greater than or equal to 30')
changed = False
balancers = []
clb = pyrax.cloud_loadbalancers
if not clb:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
balancer_list = clb.list()
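    # page through all load balancers, using the last ID seen as the pagination marker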
while balancer_list:
retrieved = clb.list(marker=balancer_list.pop().id)
balancer_list.extend(retrieved)
if len(retrieved) < 2:
break
for balancer in balancer_list:
if name != balancer.name and name != balancer.id:
continue
balancers.append(balancer)
if len(balancers) > 1:
module.fail_json(msg='Multiple Load Balancers were matched by name, '
'try using the Load Balancer ID instead')
if state == 'present':
if isinstance(meta, dict):
metadata = [dict(key=k, value=v) for k, v in meta.items()]
if not balancers:
try:
virtual_ips = [clb.VirtualIP(type=vip_type, id=vip_id)]
balancer = clb.create(name, metadata=metadata, port=port,
algorithm=algorithm, protocol=protocol,
timeout=timeout, virtual_ips=virtual_ips)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
balancer = balancers[0]
setattr(balancer, 'metadata',
[dict(key=k, value=v) for k, v in
balancer.get_metadata().items()])
atts = {
'name': name,
'algorithm': algorithm,
'port': port,
'protocol': protocol,
'timeout': timeout
}
for att, value in atts.items():
current = getattr(balancer, att)
if current != value:
changed = True
if changed:
balancer.update(**atts)
if balancer.metadata != metadata:
balancer.set_metadata(meta)
changed = True
virtual_ips = [clb.VirtualIP(type=vip_type)]
current_vip_types = set([v.type for v in balancer.virtual_ips])
vip_types = set([v.type for v in virtual_ips])
if current_vip_types != vip_types:
module.fail_json(msg='Load balancer Virtual IP type cannot '
'be changed')
if wait:
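            # poll every 5 seconds until the build completes or wait_timeout elapses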
attempts = wait_timeout // 5
pyrax.utils.wait_for_build(balancer, interval=5, attempts=attempts)
balancer.get()
instance = rax_to_dict(balancer, 'clb')
result = dict(changed=changed, balancer=instance)
if balancer.status == 'ERROR':
result['msg'] = '%s failed to build' % balancer.id
elif wait and balancer.status not in ('ACTIVE', 'ERROR'):
result['msg'] = 'Timeout waiting on %s' % balancer.id
if 'msg' in result:
module.fail_json(**result)
else:
module.exit_json(**result)
elif state == 'absent':
if balancers:
balancer = balancers[0]
try:
balancer.delete()
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
instance = rax_to_dict(balancer, 'clb')
if wait:
attempts = wait_timeout // 5
pyrax.utils.wait_until(balancer, 'status', ('DELETED'),
interval=5, attempts=attempts)
else:
instance = {}
module.exit_json(changed=changed, balancer=instance)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
algorithm=dict(choices=CLB_ALGORITHMS,
default='LEAST_CONNECTIONS'),
meta=dict(type='dict', default={}),
name=dict(required=True),
port=dict(type='int', default=80),
protocol=dict(choices=CLB_PROTOCOLS, default='HTTP'),
state=dict(default='present', choices=['present', 'absent']),
timeout=dict(type='int', default=30),
type=dict(choices=['PUBLIC', 'SERVICENET'], default='PUBLIC'),
vip_id=dict(),
wait=dict(type='bool', default=False),
wait_timeout=dict(type='int', default=300),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
algorithm = module.params.get('algorithm')
meta = module.params.get('meta')
name = module.params.get('name')
port = module.params.get('port')
protocol = module.params.get('protocol')
state = module.params.get('state')
timeout = int(module.params.get('timeout'))
vip_id = module.params.get('vip_id')
vip_type = module.params.get('type')
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
setup_rax_module(module, pyrax)
cloud_load_balancer(module, state, name, meta, algorithm, port, protocol,
vip_type, timeout, wait, wait_timeout, vip_id)
if __name__ == '__main__':
main()

plugins/modules/rax_clb_nodes.py
@@ -1,291 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_clb_nodes
short_description: Add, modify and remove nodes from a Rackspace Cloud Load Balancer
description:
- Adds, modifies and removes nodes from a Rackspace Cloud Load Balancer
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
address:
type: str
required: false
description:
- IP address or domain name of the node
condition:
type: str
required: false
choices:
- enabled
- disabled
- draining
description:
- Condition for the node, which determines its role within the load
balancer
load_balancer_id:
type: int
required: true
description:
- Load balancer id
node_id:
type: int
required: false
description:
- Node id
port:
type: int
required: false
description:
- Port number of the load balanced service on the node
state:
type: str
required: false
default: "present"
choices:
- present
- absent
description:
- Indicate desired state of the node
type:
type: str
required: false
choices:
- primary
- secondary
description:
- Type of node
wait:
required: false
default: false
type: bool
description:
- Wait for the load balancer to become active before returning
wait_timeout:
type: int
required: false
default: 30
description:
- How long to wait before giving up and returning an error
weight:
type: int
required: false
description:
- Weight of node
virtualenv:
type: path
description:
- Virtualenv to execute this module in
author: "Lukasz Kawczynski (@neuroid)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Add a new node to the load balancer
local_action:
module: rax_clb_nodes
load_balancer_id: 71
address: 10.2.2.3
port: 80
condition: enabled
type: primary
wait: true
credentials: /path/to/credentials
- name: Drain connections from a node
local_action:
module: rax_clb_nodes
load_balancer_id: 71
node_id: 410
condition: draining
wait: true
credentials: /path/to/credentials
- name: Remove a node from the load balancer
local_action:
module: rax_clb_nodes
load_balancer_id: 71
node_id: 410
state: absent
wait: true
credentials: /path/to/credentials
'''
import os
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_clb_node_to_dict, rax_required_together, setup_rax_module
def _activate_virtualenv(path):
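    # run the virtualenv's activate_this.py so packages installed there (e.g. pyrax) become importable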
activate_this = os.path.join(path, 'bin', 'activate_this.py')
with open(activate_this) as f:
code = compile(f.read(), activate_this, 'exec')
exec(code)
def _get_node(lb, node_id=None, address=None, port=None):
"""Return a matching node"""
for node in getattr(lb, 'nodes', []):
match_list = []
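        # every identifier that was supplied (node_id, address, port) must match this node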
if node_id is not None:
match_list.append(getattr(node, 'id', None) == node_id)
if address is not None:
match_list.append(getattr(node, 'address', None) == address)
if port is not None:
match_list.append(getattr(node, 'port', None) == port)
if match_list and all(match_list):
return node
return None
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
address=dict(),
condition=dict(choices=['enabled', 'disabled', 'draining']),
load_balancer_id=dict(required=True, type='int'),
node_id=dict(type='int'),
port=dict(type='int'),
state=dict(default='present', choices=['present', 'absent']),
type=dict(choices=['primary', 'secondary']),
virtualenv=dict(type='path'),
wait=dict(default=False, type='bool'),
wait_timeout=dict(default=30, type='int'),
weight=dict(type='int'),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
address = module.params['address']
condition = (module.params['condition'] and
module.params['condition'].upper())
load_balancer_id = module.params['load_balancer_id']
node_id = module.params['node_id']
port = module.params['port']
state = module.params['state']
typ = module.params['type'] and module.params['type'].upper()
virtualenv = module.params['virtualenv']
wait = module.params['wait']
wait_timeout = module.params['wait_timeout'] or 1
weight = module.params['weight']
if virtualenv:
try:
_activate_virtualenv(virtualenv)
except IOError as e:
module.fail_json(msg='Failed to activate virtualenv %s (%s)' % (
virtualenv, e))
setup_rax_module(module, pyrax)
if not pyrax.cloud_loadbalancers:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
try:
lb = pyrax.cloud_loadbalancers.get(load_balancer_id)
except pyrax.exc.PyraxException as e:
module.fail_json(msg='%s' % e.message)
node = _get_node(lb, node_id, address, port)
result = rax_clb_node_to_dict(node)
if state == 'absent':
if not node: # Removing a non-existent node
module.exit_json(changed=False, state=state)
try:
lb.delete_node(node)
result = {}
except pyrax.exc.NotFound:
module.exit_json(changed=False, state=state)
except pyrax.exc.PyraxException as e:
module.fail_json(msg='%s' % e.message)
else: # present
if not node:
if node_id: # Updating a non-existent node
msg = 'Node %d not found' % node_id
if lb.nodes:
msg += (' (available nodes: %s)' %
', '.join([str(x.id) for x in lb.nodes]))
module.fail_json(msg=msg)
else: # Creating a new node
try:
node = pyrax.cloudloadbalancers.Node(
address=address, port=port, condition=condition,
weight=weight, type=typ)
resp, body = lb.add_nodes([node])
result.update(body['nodes'][0])
except pyrax.exc.PyraxException as e:
module.fail_json(msg='%s' % e.message)
else: # Updating an existing node
mutable = {
'condition': condition,
'type': typ,
'weight': weight,
}
for name in list(mutable):
value = mutable[name]
if value is None or value == getattr(node, name):
mutable.pop(name)
if not mutable:
module.exit_json(changed=False, state=state, node=result)
try:
# The diff has to be set explicitly to update node's weight and
# type; this should probably be fixed in pyrax
lb.update_node(node, diff=mutable)
result.update(mutable)
except pyrax.exc.PyraxException as e:
module.fail_json(msg='%s' % e.message)
if wait:
pyrax.utils.wait_until(lb, "status", "ACTIVE", interval=1,
attempts=wait_timeout)
if lb.status != 'ACTIVE':
module.fail_json(
msg='Load balancer not active after %ds (current status: %s)' %
(wait_timeout, lb.status.lower()))
kwargs = {'node': result} if result else {}
module.exit_json(changed=True, state=state, **kwargs)
if __name__ == '__main__':
main()

plugins/modules/rax_clb_ssl.py
@@ -1,289 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: rax_clb_ssl
short_description: Manage SSL termination for a Rackspace Cloud Load Balancer
description:
- Set up, reconfigure, or remove SSL termination for an existing load balancer.
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
loadbalancer:
type: str
description:
- Name or ID of the load balancer on which to manage SSL termination.
required: true
state:
type: str
description:
- If set to "present", SSL termination will be added to this load balancer.
- If "absent", SSL termination will be removed instead.
choices:
- present
- absent
default: present
enabled:
description:
- If set to "false", temporarily disable SSL termination without discarding
- existing credentials.
default: true
type: bool
private_key:
type: str
description:
- The private SSL key as a string in PEM format.
certificate:
type: str
description:
- The public SSL certificates as a string in PEM format.
intermediate_certificate:
type: str
description:
- One or more intermediate certificate authorities as a string in PEM
- format, concatenated into a single string.
secure_port:
type: int
description:
- The port to listen for secure traffic.
default: 443
secure_traffic_only:
description:
- If "true", the load balancer will *only* accept secure traffic.
default: false
type: bool
https_redirect:
description:
- If "true", the load balancer will redirect HTTP traffic to HTTPS.
- Requires "secure_traffic_only" to be true. Incurs an implicit wait if SSL
- termination is also applied or removed.
type: bool
wait:
description:
      - Wait for the balancer to be in state "running" before returning.
default: false
type: bool
wait_timeout:
type: int
description:
- How long before "wait" gives up, in seconds.
default: 300
author: Ash Wilson (@smashwilson)
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Enable SSL termination on a load balancer
community.general.rax_clb_ssl:
loadbalancer: the_loadbalancer
state: present
private_key: "{{ lookup('file', 'credentials/server.key' ) }}"
certificate: "{{ lookup('file', 'credentials/server.crt' ) }}"
intermediate_certificate: "{{ lookup('file', 'credentials/trust-chain.crt') }}"
secure_traffic_only: true
wait: true
- name: Disable SSL termination
community.general.rax_clb_ssl:
loadbalancer: "{{ registered_lb.balancer.id }}"
state: absent
wait: true
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (rax_argument_spec,
rax_find_loadbalancer,
rax_required_together,
rax_to_dict,
setup_rax_module,
)
def cloud_load_balancer_ssl(module, loadbalancer, state, enabled, private_key,
certificate, intermediate_certificate, secure_port,
secure_traffic_only, https_redirect,
wait, wait_timeout):
# Validate arguments.
if state == 'present':
if not private_key:
module.fail_json(msg="private_key must be provided.")
else:
private_key = private_key.strip()
if not certificate:
module.fail_json(msg="certificate must be provided.")
else:
certificate = certificate.strip()
attempts = wait_timeout // 5
# Locate the load balancer.
balancer = rax_find_loadbalancer(module, pyrax, loadbalancer)
existing_ssl = balancer.get_ssl_termination()
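    # a falsy result means SSL termination is not currently configured on this balancer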
changed = False
if state == 'present':
# Apply or reconfigure SSL termination on the load balancer.
ssl_attrs = dict(
securePort=secure_port,
privatekey=private_key,
certificate=certificate,
intermediateCertificate=intermediate_certificate,
enabled=enabled,
secureTrafficOnly=secure_traffic_only
)
needs_change = False
if existing_ssl:
for ssl_attr, value in ssl_attrs.items():
if ssl_attr == 'privatekey':
# The private key is not included in get_ssl_termination's
# output (as it shouldn't be). Also, if you're changing the
# private key, you'll also be changing the certificate,
# so we don't lose anything by not checking it.
continue
if value is not None and existing_ssl.get(ssl_attr) != value:
# module.fail_json(msg='Unnecessary change', attr=ssl_attr, value=value, existing=existing_ssl.get(ssl_attr))
needs_change = True
else:
needs_change = True
if needs_change:
try:
balancer.add_ssl_termination(**ssl_attrs)
except pyrax.exceptions.PyraxException as e:
module.fail_json(msg='%s' % e.message)
changed = True
elif state == 'absent':
# Remove SSL termination if it's already configured.
if existing_ssl:
try:
balancer.delete_ssl_termination()
except pyrax.exceptions.PyraxException as e:
module.fail_json(msg='%s' % e.message)
changed = True
if https_redirect is not None and balancer.httpsRedirect != https_redirect:
if changed:
# This wait is unavoidable because load balancers are immutable
# while the SSL termination changes above are being applied.
pyrax.utils.wait_for_build(balancer, interval=5, attempts=attempts)
try:
balancer.update(httpsRedirect=https_redirect)
except pyrax.exceptions.PyraxException as e:
module.fail_json(msg='%s' % e.message)
changed = True
if changed and wait:
pyrax.utils.wait_for_build(balancer, interval=5, attempts=attempts)
balancer.get()
new_ssl_termination = balancer.get_ssl_termination()
# Intentionally omit the private key from the module output, so you don't
# accidentally echo it with `ansible-playbook -v` or `debug`, and the
# certificate, which is just long. Convert other attributes to snake_case
# and include https_redirect at the top-level.
if new_ssl_termination:
new_ssl = dict(
enabled=new_ssl_termination['enabled'],
secure_port=new_ssl_termination['securePort'],
secure_traffic_only=new_ssl_termination['secureTrafficOnly']
)
else:
new_ssl = None
result = dict(
changed=changed,
https_redirect=balancer.httpsRedirect,
ssl_termination=new_ssl,
balancer=rax_to_dict(balancer, 'clb')
)
success = True
if balancer.status == 'ERROR':
result['msg'] = '%s failed to build' % balancer.id
success = False
elif wait and balancer.status not in ('ACTIVE', 'ERROR'):
result['msg'] = 'Timeout waiting on %s' % balancer.id
success = False
if success:
module.exit_json(**result)
else:
module.fail_json(**result)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(dict(
loadbalancer=dict(required=True),
state=dict(default='present', choices=['present', 'absent']),
enabled=dict(type='bool', default=True),
private_key=dict(no_log=True),
certificate=dict(),
intermediate_certificate=dict(),
secure_port=dict(type='int', default=443),
secure_traffic_only=dict(type='bool', default=False),
https_redirect=dict(type='bool'),
wait=dict(type='bool', default=False),
wait_timeout=dict(type='int', default=300)
))
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module.')
loadbalancer = module.params.get('loadbalancer')
state = module.params.get('state')
enabled = module.boolean(module.params.get('enabled'))
private_key = module.params.get('private_key')
certificate = module.params.get('certificate')
intermediate_certificate = module.params.get('intermediate_certificate')
secure_port = module.params.get('secure_port')
secure_traffic_only = module.boolean(module.params.get('secure_traffic_only'))
https_redirect = module.boolean(module.params.get('https_redirect'))
wait = module.boolean(module.params.get('wait'))
wait_timeout = module.params.get('wait_timeout')
setup_rax_module(module, pyrax)
cloud_load_balancer_ssl(
module, loadbalancer, state, enabled, private_key, certificate,
intermediate_certificate, secure_port, secure_traffic_only,
https_redirect, wait, wait_timeout
)
if __name__ == '__main__':
main()

plugins/modules/rax_dns.py
@@ -1,180 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_dns
short_description: Manage domains on Rackspace Cloud DNS
description:
- Manage domains on Rackspace Cloud DNS
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
comment:
type: str
description:
- Brief description of the domain. Maximum length of 160 characters
email:
type: str
description:
- Email address of the domain administrator
name:
type: str
description:
- Domain name to create
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
ttl:
type: int
description:
- Time to live of domain in seconds
default: 3600
notes:
- "It is recommended that plays utilizing this module be run with
C(serial: 1) to avoid exceeding the API request limit imposed by
the Rackspace CloudDNS API"
author: "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Create domain
hosts: all
gather_facts: false
tasks:
- name: Domain create request
local_action:
module: rax_dns
credentials: ~/.raxpub
name: example.org
email: admin@example.org
register: rax_dns
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (rax_argument_spec,
rax_required_together,
rax_to_dict,
setup_rax_module,
)
def rax_dns(module, comment, email, name, state, ttl):
changed = False
dns = pyrax.cloud_dns
if not dns:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
if state == 'present':
if not email:
module.fail_json(msg='An "email" attribute is required for '
'creating a domain')
try:
domain = dns.find(name=name)
except pyrax.exceptions.NoUniqueMatch as e:
module.fail_json(msg='%s' % e.message)
except pyrax.exceptions.NotFound:
try:
domain = dns.create(name=name, emailAddress=email, ttl=ttl,
comment=comment)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
update = {}
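        # only send the attributes that differ from the existing domain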
if comment != getattr(domain, 'comment', None):
update['comment'] = comment
if ttl != getattr(domain, 'ttl', None):
update['ttl'] = ttl
if email != getattr(domain, 'emailAddress', None):
update['emailAddress'] = email
if update:
try:
domain.update(**update)
changed = True
domain.get()
except Exception as e:
module.fail_json(msg='%s' % e.message)
elif state == 'absent':
try:
domain = dns.find(name=name)
except pyrax.exceptions.NotFound:
domain = {}
except Exception as e:
module.fail_json(msg='%s' % e.message)
if domain:
try:
domain.delete()
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
module.exit_json(changed=changed, domain=rax_to_dict(domain))
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
comment=dict(),
email=dict(),
name=dict(),
state=dict(default='present', choices=['present', 'absent']),
ttl=dict(type='int', default=3600),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
comment = module.params.get('comment')
email = module.params.get('email')
name = module.params.get('name')
state = module.params.get('state')
ttl = module.params.get('ttl')
setup_rax_module(module, pyrax, False)
rax_dns(module, comment, email, name, state, ttl)
if __name__ == '__main__':
main()

plugins/modules/rax_dns_record.py
@@ -1,358 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_dns_record
short_description: Manage DNS records on Rackspace Cloud DNS
description:
- Manage DNS records on Rackspace Cloud DNS
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
comment:
type: str
description:
- Brief description of the domain. Maximum length of 160 characters
data:
type: str
description:
- IP address for A/AAAA record, FQDN for CNAME/MX/NS, or text data for
SRV/TXT
required: true
domain:
type: str
description:
- Domain name to create the record in. This is an invalid option when
type=PTR
loadbalancer:
type: str
description:
- Load Balancer ID to create a PTR record for. Only used with type=PTR
name:
type: str
description:
- FQDN record name to create
required: true
overwrite:
description:
      - If true, update the existing record with a matching name. If false, add a new record
        when the data does not match, instead of updating the existing one. If there are
        already multiple records with matching name and overwrite=true, this module will fail.
default: true
type: bool
priority:
type: int
description:
- Required for MX and SRV records, but forbidden for other record types.
If specified, must be an integer from 0 to 65535.
server:
type: str
description:
- Server ID to create a PTR record for. Only used with type=PTR
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
ttl:
type: int
description:
- Time to live of record in seconds
default: 3600
type:
type: str
description:
- DNS record type
choices:
- A
- AAAA
- CNAME
- MX
- NS
- SRV
- TXT
- PTR
required: true
notes:
- "It is recommended that plays utilizing this module be run with
C(serial: 1) to avoid exceeding the API request limit imposed by
the Rackspace CloudDNS API."
- To manipulate a C(PTR) record either C(loadbalancer) or C(server) must be
supplied.
author: "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Create DNS Records
hosts: all
gather_facts: false
tasks:
- name: Create A record
local_action:
module: rax_dns_record
credentials: ~/.raxpub
domain: example.org
name: www.example.org
data: "{{ rax_accessipv4 }}"
type: A
register: a_record
- name: Create PTR record
local_action:
module: rax_dns_record
credentials: ~/.raxpub
server: "{{ rax_id }}"
name: "{{ inventory_hostname }}"
region: DFW
register: ptr_record
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (rax_argument_spec,
rax_find_loadbalancer,
rax_find_server,
rax_required_together,
rax_to_dict,
setup_rax_module,
)
def rax_dns_record_ptr(module, data=None, comment=None, loadbalancer=None,
name=None, server=None, state='present', ttl=7200):
changed = False
results = []
dns = pyrax.cloud_dns
if not dns:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
if loadbalancer:
item = rax_find_loadbalancer(module, pyrax, loadbalancer)
elif server:
item = rax_find_server(module, pyrax, server)
if state == 'present':
current = dns.list_ptr_records(item)
for record in current:
if record.data == data:
if record.ttl != ttl or record.name != name:
try:
dns.update_ptr_record(item, record, name, data, ttl)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
record.ttl = ttl
record.name = name
results.append(rax_to_dict(record))
break
else:
results.append(rax_to_dict(record))
break
if not results:
record = dict(name=name, type='PTR', data=data, ttl=ttl,
comment=comment)
try:
results = dns.add_ptr_records(item, [record])
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
module.exit_json(changed=changed, records=results)
elif state == 'absent':
current = dns.list_ptr_records(item)
for record in current:
if record.data == data:
results.append(rax_to_dict(record))
break
if results:
try:
dns.delete_ptr_records(item, data)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
module.exit_json(changed=changed, records=results)
def rax_dns_record(module, comment=None, data=None, domain=None, name=None,
overwrite=True, priority=None, record_type='A',
state='present', ttl=7200):
"""Function for manipulating record types other than PTR"""
changed = False
dns = pyrax.cloud_dns
if not dns:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
if state == 'present':
if not priority and record_type in ['MX', 'SRV']:
module.fail_json(msg='A "priority" attribute is required for '
'creating a MX or SRV record')
try:
domain = dns.find(name=domain)
except Exception as e:
module.fail_json(msg='%s' % e.message)
try:
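            # overwrite=true matches on name only so the existing record gets updated;
            # otherwise the data must also match, and a differing record is added as new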
if overwrite:
record = domain.find_record(record_type, name=name)
else:
record = domain.find_record(record_type, name=name, data=data)
except pyrax.exceptions.DomainRecordNotUnique as e:
module.fail_json(msg='overwrite=true and there are multiple matching records')
except pyrax.exceptions.DomainRecordNotFound as e:
try:
record_data = {
'type': record_type,
'name': name,
'data': data,
'ttl': ttl
}
if comment:
record_data.update(dict(comment=comment))
if priority and record_type.upper() in ['MX', 'SRV']:
record_data.update(dict(priority=priority))
record = domain.add_records([record_data])[0]
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
update = {}
if comment != getattr(record, 'comment', None):
update['comment'] = comment
if ttl != getattr(record, 'ttl', None):
update['ttl'] = ttl
if priority != getattr(record, 'priority', None):
update['priority'] = priority
if data != getattr(record, 'data', None):
update['data'] = data
if update:
try:
record.update(**update)
changed = True
record.get()
except Exception as e:
module.fail_json(msg='%s' % e.message)
elif state == 'absent':
try:
domain = dns.find(name=domain)
except Exception as e:
module.fail_json(msg='%s' % e.message)
try:
record = domain.find_record(record_type, name=name, data=data)
except pyrax.exceptions.DomainRecordNotFound as e:
record = {}
except pyrax.exceptions.DomainRecordNotUnique as e:
module.fail_json(msg='%s' % e.message)
if record:
try:
record.delete()
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
module.exit_json(changed=changed, record=rax_to_dict(record))
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
comment=dict(),
data=dict(required=True),
domain=dict(),
loadbalancer=dict(),
name=dict(required=True),
overwrite=dict(type='bool', default=True),
priority=dict(type='int'),
server=dict(),
state=dict(default='present', choices=['present', 'absent']),
ttl=dict(type='int', default=3600),
type=dict(required=True, choices=['A', 'AAAA', 'CNAME', 'MX', 'NS',
'SRV', 'TXT', 'PTR'])
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
mutually_exclusive=[
['server', 'loadbalancer', 'domain'],
],
required_one_of=[
['server', 'loadbalancer', 'domain'],
],
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
comment = module.params.get('comment')
data = module.params.get('data')
domain = module.params.get('domain')
loadbalancer = module.params.get('loadbalancer')
name = module.params.get('name')
overwrite = module.params.get('overwrite')
priority = module.params.get('priority')
server = module.params.get('server')
state = module.params.get('state')
ttl = module.params.get('ttl')
record_type = module.params.get('type')
setup_rax_module(module, pyrax, False)
if record_type.upper() == 'PTR':
if not server and not loadbalancer:
module.fail_json(msg='one of the following is required: '
'server,loadbalancer')
rax_dns_record_ptr(module, data=data, comment=comment,
loadbalancer=loadbalancer, name=name, server=server,
state=state, ttl=ttl)
else:
rax_dns_record(module, comment=comment, data=data, domain=domain,
name=name, overwrite=overwrite, priority=priority,
record_type=record_type, state=state, ttl=ttl)
if __name__ == '__main__':
main()

plugins/modules/rax_facts.py
@@ -1,152 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_facts
short_description: Gather facts for Rackspace Cloud Servers
description:
- Gather facts for Rackspace Cloud Servers.
attributes:
check_mode:
version_added: 3.3.0
# This was backported to 2.5.4 and 1.3.11 as well, since this was a bugfix
options:
address:
type: str
description:
- Server IP address to retrieve facts for, will match any IP assigned to
the server
id:
type: str
description:
- Server ID to retrieve facts for
name:
type: str
description:
- Server name to retrieve facts for
author: "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
- community.general.attributes.facts
- community.general.attributes.facts_module
'''
EXAMPLES = '''
- name: Gather info about servers
hosts: all
gather_facts: false
tasks:
- name: Get facts about servers
local_action:
module: rax_facts
credentials: ~/.raxpub
name: "{{ inventory_hostname }}"
region: DFW
- name: Map some facts
ansible.builtin.set_fact:
ansible_ssh_host: "{{ rax_accessipv4 }}"
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (rax_argument_spec,
rax_required_together,
rax_to_dict,
setup_rax_module,
)
def rax_facts(module, address, name, server_id):
changed = False
cs = pyrax.cloudservers
if cs is None:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
ansible_facts = {}
search_opts = {}
if name:
search_opts = dict(name='^%s$' % name)
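        # the name pattern is anchored so only exact matches are returned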
try:
servers = cs.servers.list(search_opts=search_opts)
except Exception as e:
module.fail_json(msg='%s' % e.message)
elif address:
servers = []
try:
for server in cs.servers.list():
for addresses in server.networks.values():
if address in addresses:
servers.append(server)
break
except Exception as e:
module.fail_json(msg='%s' % e.message)
elif server_id:
servers = []
try:
servers.append(cs.servers.get(server_id))
except Exception as e:
pass
servers[:] = [server for server in servers if server.status != "DELETED"]
if len(servers) > 1:
module.fail_json(msg='Multiple servers found matching provided '
'search parameters')
elif len(servers) == 1:
ansible_facts = rax_to_dict(servers[0], 'server')
module.exit_json(changed=changed, ansible_facts=ansible_facts)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
address=dict(),
id=dict(),
name=dict(),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
mutually_exclusive=[['address', 'id', 'name']],
required_one_of=[['address', 'id', 'name']],
supports_check_mode=True,
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
address = module.params.get('address')
server_id = module.params.get('id')
name = module.params.get('name')
setup_rax_module(module, pyrax)
rax_facts(module, address, name, server_id)
if __name__ == '__main__':
main()

plugins/modules/rax_files.py
@@ -1,400 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2013, Paul Durivage <paul.durivage@rackspace.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_files
short_description: Manipulate Rackspace Cloud Files Containers
description:
- Manipulate Rackspace Cloud Files Containers
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
clear_meta:
description:
- Optionally clear existing metadata when applying metadata to existing containers.
Selecting this option is only appropriate when setting type=meta
type: bool
default: false
container:
type: str
description:
- The container to use for container or metadata operations.
meta:
type: dict
default: {}
description:
- A hash of items to set as metadata values on a container
private:
description:
- Used to set a container as private, removing it from the CDN. B(Warning!)
Private containers, if previously made public, can have live objects
available until the TTL on cached objects expires
type: bool
default: false
public:
description:
- Used to set a container as public, available via the Cloud Files CDN
type: bool
default: false
region:
type: str
description:
- Region to create an instance in
state:
type: str
description:
- Indicate desired state of the resource
choices: ['present', 'absent', 'list']
default: present
ttl:
type: int
description:
- In seconds, set a container-wide TTL for all objects cached on CDN edge nodes.
Setting a TTL is only appropriate for containers that are public
type:
type: str
description:
- Type of object to do work on, i.e. metadata object or a container object
choices:
- container
- meta
default: container
web_error:
type: str
description:
- Sets an object to be presented as the HTTP error page when accessed by the CDN URL
web_index:
type: str
description:
- Sets an object to be presented as the HTTP index page when accessed by the CDN URL
author: "Paul Durivage (@angstwad)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: "Test Cloud Files Containers"
hosts: local
gather_facts: false
tasks:
- name: "List all containers"
community.general.rax_files:
state: list
- name: "Create container called 'mycontainer'"
community.general.rax_files:
container: mycontainer
- name: "Create container 'mycontainer2' with metadata"
community.general.rax_files:
container: mycontainer2
meta:
key: value
file_for: someuser@example.com
- name: "Set a container's web index page"
community.general.rax_files:
container: mycontainer
web_index: index.html
- name: "Set a container's web error page"
community.general.rax_files:
container: mycontainer
web_error: error.html
- name: "Make container public"
community.general.rax_files:
container: mycontainer
public: true
- name: "Make container public with a 24 hour TTL"
community.general.rax_files:
container: mycontainer
public: true
ttl: 86400
- name: "Make container private"
community.general.rax_files:
container: mycontainer
private: true
- name: "Test Cloud Files Containers Metadata Storage"
hosts: local
gather_facts: false
tasks:
- name: "Get mycontainer2 metadata"
community.general.rax_files:
container: mycontainer2
type: meta
- name: "Set mycontainer2 metadata"
community.general.rax_files:
container: mycontainer2
type: meta
meta:
uploaded_by: someuser@example.com
- name: "Remove mycontainer2 metadata"
community.general.rax_files:
container: "mycontainer2"
type: meta
state: absent
meta:
key: ""
file_for: ""
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError as e:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
EXIT_DICT = dict(success=True)
META_PREFIX = 'x-container-meta-'
def _get_container(module, cf, container):
try:
return cf.get_container(container)
except pyrax.exc.NoSuchContainer as e:
module.fail_json(msg=e.message)
def _fetch_meta(module, container):
EXIT_DICT['meta'] = dict()
try:
for k, v in container.get_metadata().items():
split_key = k.split(META_PREFIX)[-1]
EXIT_DICT['meta'][split_key] = v
except Exception as e:
module.fail_json(msg=e.message)
def meta(cf, module, container_, state, meta_, clear_meta):
c = _get_container(module, cf, container_)
if meta_ and state == 'present':
try:
meta_set = c.set_metadata(meta_, clear=clear_meta)
except Exception as e:
module.fail_json(msg=e.message)
elif meta_ and state == 'absent':
remove_results = []
for k, v in meta_.items():
c.remove_metadata_key(k)
remove_results.append(k)
EXIT_DICT['deleted_meta_keys'] = remove_results
elif state == 'absent':
remove_results = []
for k, v in c.get_metadata().items():
c.remove_metadata_key(k)
remove_results.append(k)
EXIT_DICT['deleted_meta_keys'] = remove_results
_fetch_meta(module, c)
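    # these names only exist in locals() if one of the modifying branches above actually ran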
_locals = locals().keys()
EXIT_DICT['container'] = c.name
if 'meta_set' in _locals or 'remove_results' in _locals:
EXIT_DICT['changed'] = True
module.exit_json(**EXIT_DICT)
def container(cf, module, container_, state, meta_, clear_meta, ttl, public,
private, web_index, web_error):
if public and private:
module.fail_json(msg='container cannot be simultaneously '
'set to public and private')
if state == 'absent' and (meta_ or clear_meta or public or private or web_index or web_error):
        module.fail_json(msg='container attributes cannot be set or removed '
                             'when state is absent')
if state == 'list':
# We don't care if attributes are specified, let's list containers
EXIT_DICT['containers'] = cf.list_containers()
module.exit_json(**EXIT_DICT)
try:
c = cf.get_container(container_)
except pyrax.exc.NoSuchContainer as e:
# Make the container if state=present, otherwise bomb out
if state == 'present':
try:
c = cf.create_container(container_)
except Exception as e:
module.fail_json(msg=e.message)
else:
EXIT_DICT['changed'] = True
EXIT_DICT['created'] = True
else:
module.fail_json(msg=e.message)
else:
# Successfully grabbed a container object
# Delete if state is absent
if state == 'absent':
try:
cont_deleted = c.delete()
except Exception as e:
module.fail_json(msg=e.message)
else:
EXIT_DICT['deleted'] = True
if meta_:
try:
meta_set = c.set_metadata(meta_, clear=clear_meta)
except Exception as e:
module.fail_json(msg=e.message)
finally:
_fetch_meta(module, c)
if ttl:
try:
c.cdn_ttl = ttl
except Exception as e:
module.fail_json(msg=e.message)
else:
EXIT_DICT['ttl'] = c.cdn_ttl
if public:
try:
cont_public = c.make_public()
except Exception as e:
module.fail_json(msg=e.message)
else:
EXIT_DICT['container_urls'] = dict(url=c.cdn_uri,
ssl_url=c.cdn_ssl_uri,
streaming_url=c.cdn_streaming_uri,
ios_uri=c.cdn_ios_uri)
if private:
try:
cont_private = c.make_private()
except Exception as e:
module.fail_json(msg=e.message)
else:
EXIT_DICT['set_private'] = True
if web_index:
try:
cont_web_index = c.set_web_index_page(web_index)
except Exception as e:
module.fail_json(msg=e.message)
else:
EXIT_DICT['set_index'] = True
finally:
_fetch_meta(module, c)
if web_error:
try:
cont_err_index = c.set_web_error_page(web_error)
except Exception as e:
module.fail_json(msg=e.message)
else:
EXIT_DICT['set_error'] = True
finally:
_fetch_meta(module, c)
EXIT_DICT['container'] = c.name
EXIT_DICT['objs_in_container'] = c.object_count
EXIT_DICT['total_bytes'] = c.total_bytes
_locals = locals().keys()
if ('cont_deleted' in _locals
or 'meta_set' in _locals
or 'cont_public' in _locals
or 'cont_private' in _locals
or 'cont_web_index' in _locals
or 'cont_err_index' in _locals):
EXIT_DICT['changed'] = True
module.exit_json(**EXIT_DICT)
def cloudfiles(module, container_, state, meta_, clear_meta, typ, ttl, public,
private, web_index, web_error):
""" Dispatch from here to work with metadata or file objects """
cf = pyrax.cloudfiles
if cf is None:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
if typ == "container":
container(cf, module, container_, state, meta_, clear_meta, ttl,
public, private, web_index, web_error)
else:
meta(cf, module, container_, state, meta_, clear_meta)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
container=dict(),
state=dict(choices=['present', 'absent', 'list'],
default='present'),
meta=dict(type='dict', default=dict()),
clear_meta=dict(default=False, type='bool'),
type=dict(choices=['container', 'meta'], default='container'),
ttl=dict(type='int'),
public=dict(default=False, type='bool'),
private=dict(default=False, type='bool'),
web_index=dict(),
web_error=dict()
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
container_ = module.params.get('container')
state = module.params.get('state')
meta_ = module.params.get('meta')
clear_meta = module.params.get('clear_meta')
typ = module.params.get('type')
ttl = module.params.get('ttl')
public = module.params.get('public')
private = module.params.get('private')
web_index = module.params.get('web_index')
web_error = module.params.get('web_error')
if state in ['present', 'absent'] and not container_:
module.fail_json(msg='please specify a container name')
if clear_meta and not typ == 'meta':
module.fail_json(msg='clear_meta can only be used when setting '
'metadata')
setup_rax_module(module, pyrax)
cloudfiles(module, container_, state, meta_, clear_meta, typ, ttl, public,
private, web_index, web_error)
if __name__ == '__main__':
main()

plugins/modules/rax_files_objects.py
@@ -1,556 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2013, Paul Durivage <paul.durivage@rackspace.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_files_objects
short_description: Upload, download, and delete objects in Rackspace Cloud Files
description:
- Upload, download, and delete objects in Rackspace Cloud Files.
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
clear_meta:
description:
- Optionally clear existing metadata when applying metadata to existing objects.
Selecting this option is only appropriate when setting O(type=meta).
type: bool
default: false
container:
type: str
description:
- The container to use for file object operations.
required: true
dest:
type: str
description:
- The destination of a C(get) operation; i.e. a local directory, C(/home/user/myfolder).
Used to specify the destination of an operation on a remote object; i.e. a file name,
V(file1), or a comma-separated list of remote objects, V(file1,file2,file17).
expires:
type: int
description:
- Used to set an expiration in seconds on an uploaded file or folder.
meta:
type: dict
default: {}
description:
- Items to set as metadata values on an uploaded file or folder.
method:
type: str
description:
- >
The method of operation to be performed: V(put) to upload files, V(get) to download files or
V(delete) to remove remote objects in Cloud Files.
choices:
- get
- put
- delete
default: get
src:
type: str
description:
- Source from which to upload files. Used to specify a remote object as a source for
an operation, i.e. a file name, V(file1), or a comma-separated list of remote objects,
V(file1,file2,file17). Parameters O(src) and O(dest) are mutually exclusive on remote-only object operations
structure:
description:
- Used to specify whether to maintain nested directory structure when downloading objects
from Cloud Files. Setting to false downloads the contents of a container to a single,
flat directory
type: bool
default: true
type:
type: str
description:
- Type of object to do work on
- Metadata object or a file object
choices:
- file
- meta
default: file
author: "Paul Durivage (@angstwad)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: "Test Cloud Files Objects"
hosts: local
gather_facts: false
tasks:
- name: "Get objects from test container"
community.general.rax_files_objects:
container: testcont
dest: ~/Downloads/testcont
- name: "Get single object from test container"
community.general.rax_files_objects:
container: testcont
src: file1
dest: ~/Downloads/testcont
- name: "Get several objects from test container"
community.general.rax_files_objects:
container: testcont
src: file1,file2,file3
dest: ~/Downloads/testcont
- name: "Delete one object in test container"
community.general.rax_files_objects:
container: testcont
method: delete
dest: file1
- name: "Delete several objects in test container"
community.general.rax_files_objects:
container: testcont
method: delete
dest: file2,file3,file4
- name: "Delete all objects in test container"
community.general.rax_files_objects:
container: testcont
method: delete
- name: "Upload all files to test container"
community.general.rax_files_objects:
container: testcont
method: put
src: ~/Downloads/onehundred
- name: "Upload one file to test container"
community.general.rax_files_objects:
container: testcont
method: put
src: ~/Downloads/testcont/file1
- name: "Upload one file to test container with metadata"
community.general.rax_files_objects:
container: testcont
src: ~/Downloads/testcont/file2
method: put
meta:
testkey: testdata
who_uploaded_this: someuser@example.com
- name: "Upload one file to test container with TTL of 60 seconds"
community.general.rax_files_objects:
container: testcont
method: put
src: ~/Downloads/testcont/file3
expires: 60
- name: "Attempt to get remote object that does not exist"
community.general.rax_files_objects:
container: testcont
method: get
src: FileThatDoesNotExist.jpg
dest: ~/Downloads/testcont
ignore_errors: true
- name: "Attempt to delete remote object that does not exist"
community.general.rax_files_objects:
container: testcont
method: delete
dest: FileThatDoesNotExist.jpg
ignore_errors: true
- name: "Test Cloud Files Objects Metadata"
hosts: local
gather_facts: false
tasks:
- name: "Get metadata on one object"
community.general.rax_files_objects:
container: testcont
type: meta
dest: file2
- name: "Get metadata on several objects"
community.general.rax_files_objects:
container: testcont
type: meta
src: file2,file1
- name: "Set metadata on an object"
community.general.rax_files_objects:
container: testcont
type: meta
dest: file17
method: put
meta:
key1: value1
key2: value2
clear_meta: true
- name: "Verify metadata is set"
community.general.rax_files_objects:
container: testcont
type: meta
src: file17
- name: "Delete metadata"
community.general.rax_files_objects:
container: testcont
type: meta
dest: file17
method: delete
meta:
key1: ''
key2: ''
- name: "Get metadata on all objects"
community.general.rax_files_objects:
container: testcont
type: meta
'''
import os
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
EXIT_DICT = dict(success=False)
META_PREFIX = 'x-object-meta-'
def _get_container(module, cf, container):
try:
return cf.get_container(container)
except pyrax.exc.NoSuchContainer as e:
module.fail_json(msg=e.message)
def _upload_folder(cf, folder, container, ttl=None, headers=None):
""" Uploads a folder to Cloud Files.
"""
total_bytes = 0
for root, dummy, files in os.walk(folder):
for fname in files:
full_path = os.path.join(root, fname)
obj_name = os.path.relpath(full_path, folder)
obj_size = os.path.getsize(full_path)
cf.upload_file(container, full_path, obj_name=obj_name, return_none=True, ttl=ttl, headers=headers)
total_bytes += obj_size
return total_bytes
def upload(module, cf, container, src, dest, meta, expires):
""" Uploads a single object or a folder to Cloud Files Optionally sets an
metadata, TTL value (expires), or Content-Disposition and Content-Encoding
headers.
"""
if not src:
module.fail_json(msg='src must be specified when uploading')
c = _get_container(module, cf, container)
src = os.path.abspath(os.path.expanduser(src))
is_dir = os.path.isdir(src)
if not is_dir and not os.path.isfile(src) or not os.path.exists(src):
module.fail_json(msg='src must be a file or a directory')
if dest and is_dir:
module.fail_json(msg='dest cannot be set when whole '
'directories are uploaded')
cont_obj = None
total_bytes = 0
try:
if dest and not is_dir:
cont_obj = c.upload_file(src, obj_name=dest, ttl=expires, headers=meta)
elif is_dir:
total_bytes = _upload_folder(cf, src, c, ttl=expires, headers=meta)
else:
cont_obj = c.upload_file(src, ttl=expires, headers=meta)
except Exception as e:
module.fail_json(msg=e.message)
EXIT_DICT['success'] = True
EXIT_DICT['container'] = c.name
EXIT_DICT['msg'] = "Uploaded %s to container: %s" % (src, c.name)
if cont_obj or total_bytes > 0:
EXIT_DICT['changed'] = True
if meta:
EXIT_DICT['meta'] = dict(updated=True)
if cont_obj:
EXIT_DICT['bytes'] = cont_obj.total_bytes
EXIT_DICT['etag'] = cont_obj.etag
else:
EXIT_DICT['bytes'] = total_bytes
module.exit_json(**EXIT_DICT)
def download(module, cf, container, src, dest, structure):
""" Download objects from Cloud Files to a local path specified by "dest".
Optionally disable maintaining a directory structure by passing a
false value to "structure".
"""
# Looking for an explicit destination
if not dest:
module.fail_json(msg='dest is a required argument when '
'downloading from Cloud Files')
# Attempt to fetch the container by name
c = _get_container(module, cf, container)
# Accept a single object name or a comma-separated list of objs
# If not specified, get the entire container
if src:
objs = [obj.strip() for obj in src.split(',')]
else:
objs = c.get_object_names()
dest = os.path.abspath(os.path.expanduser(dest))
is_dir = os.path.isdir(dest)
if not is_dir:
module.fail_json(msg='dest must be a directory')
try:
results = [c.download_object(obj, dest, structure=structure) for obj in objs]
except Exception as e:
module.fail_json(msg=e.message)
len_results = len(results)
len_objs = len(objs)
EXIT_DICT['container'] = c.name
EXIT_DICT['requested_downloaded'] = results
if results:
EXIT_DICT['changed'] = True
if len_results == len_objs:
EXIT_DICT['success'] = True
EXIT_DICT['msg'] = "%s objects downloaded to %s" % (len_results, dest)
else:
EXIT_DICT['msg'] = "Error: only %s of %s objects were " \
"downloaded" % (len_results, len_objs)
module.exit_json(**EXIT_DICT)
def delete(module, cf, container, src, dest):
""" Delete specific objects by proving a single file name or a
comma-separated list to src OR dest (but not both). Omitting file name(s)
assumes the entire container is to be deleted.
"""
if src and dest:
module.fail_json(msg="Error: ambiguous instructions; files to be deleted "
"have been specified on both src and dest args")
c = _get_container(module, cf, container)
objs = dest or src
if objs:
objs = [obj.strip() for obj in objs.split(',')]
else:
objs = c.get_object_names()
num_objs = len(objs)
try:
results = [c.delete_object(obj) for obj in objs]
except Exception as e:
module.fail_json(msg=e.message)
num_deleted = results.count(True)
EXIT_DICT['container'] = c.name
EXIT_DICT['deleted'] = num_deleted
EXIT_DICT['requested_deleted'] = objs
if num_deleted:
EXIT_DICT['changed'] = True
if num_objs == num_deleted:
EXIT_DICT['success'] = True
EXIT_DICT['msg'] = "%s objects deleted" % num_deleted
else:
EXIT_DICT['msg'] = ("Error: only %s of %s objects "
"deleted" % (num_deleted, num_objs))
module.exit_json(**EXIT_DICT)
def get_meta(module, cf, container, src, dest):
""" Get metadata for a single file, comma-separated list, or entire
container
"""
if src and dest:
module.fail_json(msg="Error: ambiguous instructions; files to be deleted "
"have been specified on both src and dest args")
c = _get_container(module, cf, container)
objs = dest or src
if objs:
objs = map(str.strip, objs.split(','))
else:
objs = c.get_object_names()
try:
results = dict()
for obj in objs:
meta = c.get_object(obj).get_metadata()
results[obj] = dict((k.split(META_PREFIX)[-1], v) for k, v in meta.items())
except Exception as e:
module.fail_json(msg=e.message)
EXIT_DICT['container'] = c.name
if results:
EXIT_DICT['meta_results'] = results
EXIT_DICT['success'] = True
module.exit_json(**EXIT_DICT)
def put_meta(module, cf, container, src, dest, meta, clear_meta):
""" Set metadata on a container, single file, or comma-separated list.
Passing a true value to clear_meta clears the metadata stored in Cloud
Files before setting the new metadata to the value of "meta".
"""
if src and dest:
module.fail_json(msg="Error: ambiguous instructions; files to set meta"
" have been specified on both src and dest args")
objs = dest or src
objs = map(str.strip, objs.split(','))
c = _get_container(module, cf, container)
try:
results = [c.get_object(obj).set_metadata(meta, clear=clear_meta) for obj in objs]
except Exception as e:
module.fail_json(msg=e.message)
EXIT_DICT['container'] = c.name
EXIT_DICT['success'] = True
if results:
EXIT_DICT['changed'] = True
EXIT_DICT['num_changed'] = len(results)
module.exit_json(**EXIT_DICT)
def delete_meta(module, cf, container, src, dest, meta):
""" Removes metadata keys and values specified in meta, if any. Deletes on
all objects specified by src or dest (but not both), if any; otherwise it
deletes keys on all objects in the container
"""
if src and dest:
module.fail_json(msg="Error: ambiguous instructions; meta keys to be "
"deleted have been specified on both src and dest"
" args")
c = _get_container(module, cf, container)
objs = dest or src
if objs:
objs = [obj.strip() for obj in objs.split(',')]
else:
objs = c.get_object_names()
try:
results = []
for obj in objs:
o = c.get_object(obj)
for k in (meta or o.get_metadata()):
results.append(o.remove_metadata_key(k))
except Exception as e:
module.fail_json(msg=e.message)
EXIT_DICT['container'] = c.name
EXIT_DICT['success'] = True
if results:
EXIT_DICT['changed'] = True
EXIT_DICT['num_deleted'] = len(results)
module.exit_json(**EXIT_DICT)
def cloudfiles(module, container, src, dest, method, typ, meta, clear_meta,
structure, expires):
""" Dispatch from here to work with metadata or file objects """
cf = pyrax.cloudfiles
if cf is None:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
if typ == "file":
if method == 'get':
download(module, cf, container, src, dest, structure)
if method == 'put':
upload(module, cf, container, src, dest, meta, expires)
if method == 'delete':
delete(module, cf, container, src, dest)
else:
if method == 'get':
get_meta(module, cf, container, src, dest)
if method == 'put':
put_meta(module, cf, container, src, dest, meta, clear_meta)
if method == 'delete':
delete_meta(module, cf, container, src, dest, meta)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
container=dict(required=True),
src=dict(),
dest=dict(),
method=dict(default='get', choices=['put', 'get', 'delete']),
type=dict(default='file', choices=['file', 'meta']),
meta=dict(type='dict', default=dict()),
clear_meta=dict(default=False, type='bool'),
structure=dict(default=True, type='bool'),
expires=dict(type='int'),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
container = module.params.get('container')
src = module.params.get('src')
dest = module.params.get('dest')
method = module.params.get('method')
typ = module.params.get('type')
meta = module.params.get('meta')
clear_meta = module.params.get('clear_meta')
structure = module.params.get('structure')
expires = module.params.get('expires')
if clear_meta and not typ == 'meta':
module.fail_json(msg='clear_meta can only be used when setting metadata')
setup_rax_module(module, pyrax)
cloudfiles(module, container, src, dest, method, typ, meta, clear_meta, structure, expires)
if __name__ == '__main__':
main()
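
For readers migrating away from this removed module: the pyrax calls it wrapped can be driven directly. Below is a minimal sketch, assuming pyrax is installed and its credentials are already configured (for example from a credentials file); the container and path names are illustrative only.

# Illustrative sketch only, not part of the removed module.
import pyrax


def upload_and_list(container_name, local_path):
    """Upload one local file and return the container's object names."""
    cf = pyrax.cloudfiles  # assumes pyrax credentials were set up beforehand
    container = cf.get_container(container_name)
    # No TTL and no extra headers, mirroring the defaults above.
    container.upload_file(local_path, ttl=None, headers=None)
    return container.get_object_names()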

View file

@@ -1,110 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_identity
short_description: Load Rackspace Cloud Identity
description:
- Verifies Rackspace Cloud credentials and returns identity information
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
state:
type: str
description:
- Indicate desired state of the resource
choices: ['present']
default: present
required: false
author:
- "Christopher H. Laco (@claco)"
- "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Load Rackspace Cloud Identity
gather_facts: false
hosts: local
connection: local
tasks:
- name: Load Identity
local_action:
module: rax_identity
credentials: ~/.raxpub
region: DFW
register: rackspace_identity
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (rax_argument_spec, rax_required_together, rax_to_dict,
setup_rax_module)
def cloud_identity(module, state, identity):
instance = dict(
authenticated=identity.authenticated,
credentials=identity._creds_file
)
changed = False
instance.update(rax_to_dict(identity))
instance['services'] = list(instance.get('services', {}).keys())
if state == 'present':
if not identity.authenticated:
module.fail_json(msg='Credentials could not be verified!')
module.exit_json(changed=changed, identity=instance)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present'])
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
setup_rax_module(module, pyrax)
if not pyrax.identity:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
cloud_identity(module, state, pyrax.identity)
if __name__ == '__main__':
main()
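
The removed module above boiled down to checking that the pyrax identity client authenticated successfully. A minimal standalone sketch, assuming pyrax credentials are already configured:

# Illustrative sketch only, not part of the removed module.
import pyrax


def verify_credentials():
    """Return the authenticated pyrax identity, or raise if it is missing."""
    identity = pyrax.identity
    if not identity or not identity.authenticated:
        raise RuntimeError('Rackspace credentials could not be verified')
    return identity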

View file

@@ -1,179 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_keypair
short_description: Create a keypair for use with Rackspace Cloud Servers
description:
- Create a keypair for use with Rackspace Cloud Servers
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
name:
type: str
description:
- Name of keypair
required: true
public_key:
type: str
description:
- Public Key string to upload. Can be a file path or string
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
author: "Matt Martz (@sivel)"
notes:
- Keypairs cannot be manipulated, only created and deleted. To "update" a
keypair you must first delete and then recreate.
- The ability to specify a file path for the public key was added in 1.7
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Create a keypair
hosts: localhost
gather_facts: false
tasks:
- name: Keypair request
local_action:
module: rax_keypair
credentials: ~/.raxpub
name: my_keypair
region: DFW
register: keypair
- name: Create local public key
local_action:
module: copy
content: "{{ keypair.keypair.public_key }}"
dest: "{{ inventory_dir }}/{{ keypair.keypair.name }}.pub"
- name: Create local private key
local_action:
module: copy
content: "{{ keypair.keypair.private_key }}"
dest: "{{ inventory_dir }}/{{ keypair.keypair.name }}"
- name: Create a keypair
hosts: localhost
gather_facts: false
tasks:
- name: Keypair request
local_action:
module: rax_keypair
credentials: ~/.raxpub
name: my_keypair
public_key: "{{ lookup('file', 'authorized_keys/id_rsa.pub') }}"
region: DFW
register: keypair
'''
import os
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (rax_argument_spec,
rax_required_together,
rax_to_dict,
setup_rax_module,
)
def rax_keypair(module, name, public_key, state):
changed = False
cs = pyrax.cloudservers
if cs is None:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
keypair = {}
if state == 'present':
if public_key and os.path.isfile(public_key):
try:
with open(public_key) as f:
public_key = f.read()
except Exception:
module.fail_json(msg='Failed to load %s' % public_key)
try:
keypair = cs.keypairs.find(name=name)
except cs.exceptions.NotFound:
try:
keypair = cs.keypairs.create(name, public_key)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
except Exception as e:
module.fail_json(msg='%s' % e.message)
elif state == 'absent':
try:
keypair = cs.keypairs.find(name=name)
except Exception:
pass
if keypair:
try:
keypair.delete()
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
module.exit_json(changed=changed, keypair=rax_to_dict(keypair))
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
name=dict(required=True),
public_key=dict(),
state=dict(default='present', choices=['absent', 'present']),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
name = module.params.get('name')
public_key = module.params.get('public_key')
state = module.params.get('state')
setup_rax_module(module, pyrax)
rax_keypair(module, name, public_key, state)
if __name__ == '__main__':
main()
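
The find-then-create pattern above is the whole of this module's idempotence logic. A hedged sketch of the same pattern against pyrax directly (credentials assumed configured; the keypair name is illustrative):

# Illustrative sketch only, not part of the removed module.
import pyrax


def ensure_keypair(name, public_key=None):
    """Return (keypair, created) for the given name, creating it if absent."""
    cs = pyrax.cloudservers
    try:
        return cs.keypairs.find(name=name), False
    except cs.exceptions.NotFound:
        return cs.keypairs.create(name, public_key), True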

View file

@@ -1,182 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_meta
short_description: Manipulate metadata for Rackspace Cloud Servers
description:
- Manipulate metadata for Rackspace Cloud Servers
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
address:
type: str
description:
- Server IP address to modify metadata for; it will match any IP address
assigned to the server.
id:
type: str
description:
- Server ID to modify metadata for
name:
type: str
description:
- Server name to modify metadata for
meta:
type: dict
default: {}
description:
- A hash of metadata to associate with the instance
author: "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Set metadata for a server
hosts: all
gather_facts: false
tasks:
- name: Set metadata
local_action:
module: rax_meta
credentials: ~/.raxpub
name: "{{ inventory_hostname }}"
region: DFW
meta:
group: primary_group
groups:
- group_two
- group_three
app: my_app
- name: Clear metadata
local_action:
module: rax_meta
credentials: ~/.raxpub
name: "{{ inventory_hostname }}"
region: DFW
'''
import json
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
from ansible.module_utils.six import string_types
def rax_meta(module, address, name, server_id, meta):
changed = False
cs = pyrax.cloudservers
if cs is None:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
search_opts = {}
if name:
search_opts = dict(name='^%s$' % name)
try:
servers = cs.servers.list(search_opts=search_opts)
except Exception as e:
module.fail_json(msg='%s' % e.message)
elif address:
servers = []
try:
for server in cs.servers.list():
for addresses in server.networks.values():
if address in addresses:
servers.append(server)
break
except Exception as e:
module.fail_json(msg='%s' % e.message)
elif server_id:
servers = []
try:
servers.append(cs.servers.get(server_id))
except Exception as e:
pass
if len(servers) > 1:
module.fail_json(msg='Multiple servers found matching provided '
'search parameters')
elif not servers:
module.fail_json(msg='Failed to find a server matching provided '
'search parameters')
# Normalize and ensure all metadata values are strings
for k, v in meta.items():
if isinstance(v, list):
meta[k] = ','.join(['%s' % i for i in v])
elif isinstance(v, dict):
meta[k] = json.dumps(v)
elif not isinstance(v, string_types):
meta[k] = '%s' % v
server = servers[0]
if server.metadata == meta:
changed = False
else:
changed = True
removed = set(server.metadata.keys()).difference(meta.keys())
cs.servers.delete_meta(server, list(removed))
cs.servers.set_meta(server, meta)
server.get()
module.exit_json(changed=changed, meta=server.metadata)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
address=dict(),
id=dict(),
name=dict(),
meta=dict(type='dict', default=dict()),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
mutually_exclusive=[['address', 'id', 'name']],
required_one_of=[['address', 'id', 'name']],
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
address = module.params.get('address')
server_id = module.params.get('id')
name = module.params.get('name')
meta = module.params.get('meta')
setup_rax_module(module, pyrax)
rax_meta(module, address, name, server_id, meta)
if __name__ == '__main__':
main()
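
The value-coercion step above (lists joined with commas, dicts JSON-encoded, everything else stringified) is the piece most worth preserving when replacing this module. A standalone sketch of that normalization in plain Python:

# Illustrative sketch of the metadata normalization shown above.
import json


def normalize_meta(meta):
    """Coerce every metadata value to a string."""
    normalized = {}
    for key, value in meta.items():
        if isinstance(value, list):
            normalized[key] = ','.join('%s' % item for item in value)
        elif isinstance(value, dict):
            normalized[key] = json.dumps(value)
        elif not isinstance(value, str):
            normalized[key] = '%s' % value
        else:
            normalized[key] = value
    return normalized

# normalize_meta({'groups': ['a', 'b'], 'count': 3}) -> {'groups': 'a,b', 'count': '3'}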

View file

@@ -1,235 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_mon_alarm
short_description: Create or delete a Rackspace Cloud Monitoring alarm
description:
- Create or delete a Rackspace Cloud Monitoring alarm that associates an
existing rax_mon_entity, rax_mon_check, and rax_mon_notification_plan with
criteria that specify what conditions will trigger which levels of
notifications. Rackspace monitoring module flow | rax_mon_entity ->
rax_mon_check -> rax_mon_notification -> rax_mon_notification_plan ->
*rax_mon_alarm*
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
state:
type: str
description:
- Ensure that the alarm with this O(label) exists or does not exist.
choices: [ "present", "absent" ]
required: false
default: present
label:
type: str
description:
- Friendly name for this alarm, used to achieve idempotence. Must be a String
between 1 and 255 characters long.
required: true
entity_id:
type: str
description:
- ID of the entity this alarm is attached to. May be acquired by registering
the value of a rax_mon_entity task.
required: true
check_id:
type: str
description:
- ID of the check that should be alerted on. May be acquired by registering
the value of a rax_mon_check task.
required: true
notification_plan_id:
type: str
description:
- ID of the notification plan to trigger if this alarm fires. May be acquired
by registering the value of a rax_mon_notification_plan task.
required: true
criteria:
type: str
description:
- Alarm DSL that describes alerting conditions and their output states. Must
be between 1 and 16384 characters long. See
http://docs.rackspace.com/cm/api/v1.0/cm-devguide/content/alerts-language.html
for a reference on the alerting language.
disabled:
description:
- If V(true), create this alarm, but leave it in an inactive state.
type: bool
default: false
metadata:
type: dict
description:
- Arbitrary key/value pairs to accompany the alarm. Must be a hash of String
keys and values between 1 and 255 characters long.
author: Ash Wilson (@smashwilson)
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Alarm example
gather_facts: false
hosts: local
connection: local
tasks:
- name: Ensure that a specific alarm exists.
community.general.rax_mon_alarm:
credentials: ~/.rax_pub
state: present
label: uhoh
entity_id: "{{ the_entity['entity']['id'] }}"
check_id: "{{ the_check['check']['id'] }}"
notification_plan_id: "{{ defcon1['notification_plan']['id'] }}"
criteria: >
if (rate(metric['average']) > 10) {
return new AlarmStatus(WARNING);
}
return new AlarmStatus(OK);
register: the_alarm
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
def alarm(module, state, label, entity_id, check_id, notification_plan_id, criteria,
disabled, metadata):
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
if criteria and (len(criteria) < 1 or len(criteria) > 16384):
module.fail_json(msg='criteria must be between 1 and 16384 characters long')
# Coerce attributes.
changed = False
alarm = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = [a for a in cm.list_alarms(entity_id) if a.label == label]
if existing:
alarm = existing[0]
if state == 'present':
should_create = False
should_update = False
should_delete = False
if len(existing) > 1:
module.fail_json(msg='%s existing alarms have the label %s.' %
(len(existing), label))
if alarm:
if check_id != alarm.check_id or notification_plan_id != alarm.notification_plan_id:
should_delete = should_create = True
should_update = (disabled and disabled != alarm.disabled) or \
(metadata and metadata != alarm.metadata) or \
(criteria and criteria != alarm.criteria)
if should_update and not should_delete:
cm.update_alarm(entity=entity_id, alarm=alarm,
criteria=criteria, disabled=disabled,
label=label, metadata=metadata)
changed = True
if should_delete:
alarm.delete()
changed = True
else:
should_create = True
if should_create:
alarm = cm.create_alarm(entity=entity_id, check=check_id,
notification_plan=notification_plan_id,
criteria=criteria, disabled=disabled, label=label,
metadata=metadata)
changed = True
else:
for a in existing:
a.delete()
changed = True
if alarm:
alarm_dict = {
"id": alarm.id,
"label": alarm.label,
"check_id": alarm.check_id,
"notification_plan_id": alarm.notification_plan_id,
"criteria": alarm.criteria,
"disabled": alarm.disabled,
"metadata": alarm.metadata
}
module.exit_json(changed=changed, alarm=alarm_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
entity_id=dict(required=True),
check_id=dict(required=True),
notification_plan_id=dict(required=True),
criteria=dict(),
disabled=dict(type='bool', default=False),
metadata=dict(type='dict')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
entity_id = module.params.get('entity_id')
check_id = module.params.get('check_id')
notification_plan_id = module.params.get('notification_plan_id')
criteria = module.params.get('criteria')
disabled = module.boolean(module.params.get('disabled'))
metadata = module.params.get('metadata')
setup_rax_module(module, pyrax)
alarm(module, state, label, entity_id, check_id, notification_plan_id,
criteria, disabled, metadata)
if __name__ == '__main__':
main()
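
As the documentation above notes, an alarm ties an entity, a check, and a notification plan together with a criteria DSL string. A minimal pyrax sketch of that wiring, assuming credentials are configured and that the IDs are placeholders registered from earlier steps:

# Illustrative sketch only, not part of the removed module.
import pyrax

CRITERIA = """
if (rate(metric['average']) > 10) {
  return new AlarmStatus(WARNING);
}
return new AlarmStatus(OK);
"""


def ensure_alarm(entity_id, check_id, plan_id, label='uhoh'):
    """Create the alarm unless one with the same label already exists."""
    cm = pyrax.cloud_monitoring
    existing = [a for a in cm.list_alarms(entity_id) if a.label == label]
    if existing:
        return existing[0]
    return cm.create_alarm(entity=entity_id, check=check_id,
                           notification_plan=plan_id, criteria=CRITERIA,
                           disabled=False, label=label, metadata=None)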

View file

@@ -1,329 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_mon_check
short_description: Create or delete a Rackspace Cloud Monitoring check for an
existing entity.
description:
- Create or delete a Rackspace Cloud Monitoring check associated with an
existing rax_mon_entity. A check is a specific test or measurement that is
performed, possibly from different monitoring zones, on the systems you
monitor. Rackspace monitoring module flow | rax_mon_entity ->
*rax_mon_check* -> rax_mon_notification -> rax_mon_notification_plan ->
rax_mon_alarm
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
state:
type: str
description:
- Ensure that a check with this O(label) exists or does not exist.
choices: ["present", "absent"]
default: present
entity_id:
type: str
description:
- ID of the rax_mon_entity to target with this check.
required: true
label:
type: str
description:
- Defines a label for this check, between 1 and 64 characters long.
required: true
check_type:
type: str
description:
- The type of check to create. C(remote.) checks may be created on any
rax_mon_entity. C(agent.) checks may only be created on rax_mon_entities
that have a non-null C(agent_id).
- |
Choices for this option are:
- V(remote.dns)
- V(remote.ftp-banner)
- V(remote.http)
- V(remote.imap-banner)
- V(remote.mssql-banner)
- V(remote.mysql-banner)
- V(remote.ping)
- V(remote.pop3-banner)
- V(remote.postgresql-banner)
- V(remote.smtp-banner)
- V(remote.smtp)
- V(remote.ssh)
- V(remote.tcp)
- V(remote.telnet-banner)
- V(agent.filesystem)
- V(agent.memory)
- V(agent.load_average)
- V(agent.cpu)
- V(agent.disk)
- V(agent.network)
- V(agent.plugin)
required: true
monitoring_zones_poll:
type: str
description:
- Comma-separated list of the names of the monitoring zones the check should
run from. Available monitoring zones include mzdfw, mzhkg, mziad, mzlon,
mzord and mzsyd. Required for remote.* checks; prohibited for agent.* checks.
target_hostname:
type: str
description:
- One of O(target_hostname) and O(target_alias) is required for remote.* checks,
but prohibited for agent.* checks. The hostname this check should target.
Must be a valid IPv4, IPv6, or FQDN.
target_alias:
type: str
description:
- One of O(target_alias) and O(target_hostname) is required for remote.* checks,
but prohibited for agent.* checks. Use the corresponding key in the entity's
C(ip_addresses) hash to resolve an IP address to target.
details:
type: dict
default: {}
description:
- Additional details specific to the check type. Must be a hash of strings
between 1 and 255 characters long, or an array or object containing 0 to
256 items.
disabled:
description:
- If V(true), ensure the check is created, but don't actually use it yet.
type: bool
default: false
metadata:
type: dict
default: {}
description:
- Hash of arbitrary key-value pairs to accompany this check if it fires.
Keys and values must be strings between 1 and 255 characters long.
period:
type: int
description:
- The number of seconds between each time the check is performed. Must be
greater than the minimum period set on your account.
timeout:
type: int
description:
- The number of seconds this check will wait when attempting to collect
results. Must be less than the period.
author: Ash Wilson (@smashwilson)
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Create a monitoring check
gather_facts: false
hosts: local
connection: local
tasks:
- name: Associate a check with an existing entity.
community.general.rax_mon_check:
credentials: ~/.rax_pub
state: present
entity_id: "{{ the_entity['entity']['id'] }}"
label: the_check
check_type: remote.ping
monitoring_zones_poll: mziad,mzord,mzdfw
details:
count: 10
meta:
hurf: durf
register: the_check
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
def cloud_check(module, state, entity_id, label, check_type,
monitoring_zones_poll, target_hostname, target_alias, details,
disabled, metadata, period, timeout):
# Coerce attributes.
if monitoring_zones_poll and not isinstance(monitoring_zones_poll, list):
monitoring_zones_poll = [monitoring_zones_poll]
if period:
period = int(period)
if timeout:
timeout = int(timeout)
changed = False
check = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
entity = cm.get_entity(entity_id)
if not entity:
module.fail_json(msg='Failed to instantiate entity. "%s" may not be'
' a valid entity id.' % entity_id)
existing = [e for e in entity.list_checks() if e.label == label]
if existing:
check = existing[0]
if state == 'present':
if len(existing) > 1:
module.fail_json(msg='%s existing checks have a label of %s.' %
(len(existing), label))
should_delete = False
should_create = False
should_update = False
if check:
# Details may include keys set to default values that are not
# included in the initial creation.
#
# Only force a recreation of the check if one of the *specified*
# keys is missing or has a different value.
if details:
for (key, value) in details.items():
if key not in check.details:
should_delete = should_create = True
elif value != check.details[key]:
should_delete = should_create = True
should_update = label != check.label or \
(target_hostname and target_hostname != check.target_hostname) or \
(target_alias and target_alias != check.target_alias) or \
(disabled != check.disabled) or \
(metadata and metadata != check.metadata) or \
(period and period != check.period) or \
(timeout and timeout != check.timeout) or \
(monitoring_zones_poll and monitoring_zones_poll != check.monitoring_zones_poll)
if should_update and not should_delete:
check.update(label=label,
disabled=disabled,
metadata=metadata,
monitoring_zones_poll=monitoring_zones_poll,
timeout=timeout,
period=period,
target_alias=target_alias,
target_hostname=target_hostname)
changed = True
else:
# The check doesn't exist yet.
should_create = True
if should_delete:
check.delete()
if should_create:
check = cm.create_check(entity,
label=label,
check_type=check_type,
target_hostname=target_hostname,
target_alias=target_alias,
monitoring_zones_poll=monitoring_zones_poll,
details=details,
disabled=disabled,
metadata=metadata,
period=period,
timeout=timeout)
changed = True
elif state == 'absent':
if check:
check.delete()
changed = True
else:
module.fail_json(msg='state must be either present or absent.')
if check:
check_dict = {
"id": check.id,
"label": check.label,
"type": check.type,
"target_hostname": check.target_hostname,
"target_alias": check.target_alias,
"monitoring_zones_poll": check.monitoring_zones_poll,
"details": check.details,
"disabled": check.disabled,
"metadata": check.metadata,
"period": check.period,
"timeout": check.timeout
}
module.exit_json(changed=changed, check=check_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
entity_id=dict(required=True),
label=dict(required=True),
check_type=dict(required=True),
monitoring_zones_poll=dict(),
target_hostname=dict(),
target_alias=dict(),
details=dict(type='dict', default={}),
disabled=dict(type='bool', default=False),
metadata=dict(type='dict', default={}),
period=dict(type='int'),
timeout=dict(type='int'),
state=dict(default='present', choices=['present', 'absent'])
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
entity_id = module.params.get('entity_id')
label = module.params.get('label')
check_type = module.params.get('check_type')
monitoring_zones_poll = module.params.get('monitoring_zones_poll')
target_hostname = module.params.get('target_hostname')
target_alias = module.params.get('target_alias')
details = module.params.get('details')
disabled = module.boolean(module.params.get('disabled'))
metadata = module.params.get('metadata')
period = module.params.get('period')
timeout = module.params.get('timeout')
state = module.params.get('state')
setup_rax_module(module, pyrax)
cloud_check(module, state, entity_id, label, check_type,
monitoring_zones_poll, target_hostname, target_alias, details,
disabled, metadata, period, timeout)
if __name__ == '__main__':
main()
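
A hedged sketch of creating a remote.ping check directly with pyrax, mirroring the example above; the entity ID, target address, and monitoring zones are placeholders:

# Illustrative sketch only, not part of the removed module.
import pyrax


def ensure_ping_check(entity_id, label='the_check'):
    """Create a remote.ping check on the entity unless the label already exists."""
    cm = pyrax.cloud_monitoring
    entity = cm.get_entity(entity_id)
    existing = [c for c in entity.list_checks() if c.label == label]
    if existing:
        return existing[0]
    return cm.create_check(entity, label=label, check_type='remote.ping',
                           target_hostname='192.0.2.4',  # placeholder address
                           monitoring_zones_poll=['mziad', 'mzord', 'mzdfw'],
                           details={'count': 10})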

View file

@@ -1,201 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_mon_entity
short_description: Create or delete a Rackspace Cloud Monitoring entity
description:
- Create or delete a Rackspace Cloud Monitoring entity, which represents a device
to monitor. Entities associate checks and alarms with a target system and
provide a convenient, centralized place to store IP addresses. Rackspace
monitoring module flow | *rax_mon_entity* -> rax_mon_check ->
rax_mon_notification -> rax_mon_notification_plan -> rax_mon_alarm
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
label:
type: str
description:
- Defines a name for this entity. Must be a non-empty string between 1 and
255 characters long.
required: true
state:
type: str
description:
- Ensure that an entity with this O(label) exists or does not exist.
choices: ["present", "absent"]
default: present
agent_id:
type: str
description:
- Rackspace monitoring agent on the target device to which this entity is
bound. Necessary to collect C(agent.) rax_mon_checks against this entity.
named_ip_addresses:
type: dict
default: {}
description:
- Hash of IP addresses that may be referenced by name by rax_mon_checks
added to this entity. Must be a dictionary with keys that are names
between 1 and 64 characters long, and values that are valid IPv4 or IPv6
addresses.
metadata:
type: dict
default: {}
description:
- Hash of arbitrary C(name), C(value) pairs that are passed to associated
rax_mon_alarms. Names and values must all be between 1 and 255 characters
long.
author: Ash Wilson (@smashwilson)
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Entity example
gather_facts: false
hosts: local
connection: local
tasks:
- name: Ensure an entity exists
community.general.rax_mon_entity:
credentials: ~/.rax_pub
state: present
label: my_entity
named_ip_addresses:
web_box: 192.0.2.4
db_box: 192.0.2.5
meta:
hurf: durf
register: the_entity
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
def cloud_monitoring(module, state, label, agent_id, named_ip_addresses,
metadata):
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
changed = False
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = []
for entity in cm.list_entities():
if label == entity.label:
existing.append(entity)
entity = None
if existing:
entity = existing[0]
if state == 'present':
should_update = False
should_delete = False
should_create = False
if len(existing) > 1:
module.fail_json(msg='%s existing entities have the label %s.' %
(len(existing), label))
if entity:
if named_ip_addresses and named_ip_addresses != entity.ip_addresses:
should_delete = should_create = True
# Change an existing Entity, unless there's nothing to do.
should_update = agent_id and agent_id != entity.agent_id or \
(metadata and metadata != entity.metadata)
if should_update and not should_delete:
entity.update(agent_id, metadata)
changed = True
if should_delete:
entity.delete()
else:
should_create = True
if should_create:
# Create a new Entity.
entity = cm.create_entity(label=label, agent=agent_id,
ip_addresses=named_ip_addresses,
metadata=metadata)
changed = True
else:
# Delete the existing Entities.
for e in existing:
e.delete()
changed = True
if entity:
entity_dict = {
"id": entity.id,
"name": entity.name,
"agent_id": entity.agent_id,
}
module.exit_json(changed=changed, entity=entity_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
agent_id=dict(),
named_ip_addresses=dict(type='dict', default={}),
metadata=dict(type='dict', default={})
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
agent_id = module.params.get('agent_id')
named_ip_addresses = module.params.get('named_ip_addresses')
metadata = module.params.get('metadata')
setup_rax_module(module, pyrax)
cloud_monitoring(module, state, label, agent_id, named_ip_addresses, metadata)
if __name__ == '__main__':
main()
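
The entity is the first link in the monitoring chain described above. A minimal pyrax sketch of the same look-up-by-label-then-create behaviour (credentials assumed configured; the IP addresses are documentation examples):

# Illustrative sketch only, not part of the removed module.
import pyrax


def ensure_entity(label='my_entity'):
    """Return an existing entity with this label, or create a new one."""
    cm = pyrax.cloud_monitoring
    existing = [e for e in cm.list_entities() if e.label == label]
    if existing:
        return existing[0]
    return cm.create_entity(label=label,
                            ip_addresses={'web_box': '192.0.2.4',
                                          'db_box': '192.0.2.5'})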

View file

@@ -1,182 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_mon_notification
short_description: Create or delete a Rackspace Cloud Monitoring notification
description:
- Create or delete a Rackspace Cloud Monitoring notification that specifies a
channel that can be used to communicate alarms, such as email, webhooks, or
PagerDuty. Rackspace monitoring module flow | rax_mon_entity -> rax_mon_check ->
*rax_mon_notification* -> rax_mon_notification_plan -> rax_mon_alarm
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
state:
type: str
description:
- Ensure that the notification with this O(label) exists or does not exist.
choices: ['present', 'absent']
default: present
label:
type: str
description:
- Defines a friendly name for this notification. String between 1 and 255
characters long.
required: true
notification_type:
type: str
description:
- A supported notification type.
choices: ["webhook", "email", "pagerduty"]
required: true
details:
type: dict
description:
- Dictionary of key-value pairs used to initialize the notification.
Required keys and meanings vary with notification type. See
http://docs.rackspace.com/cm/api/v1.0/cm-devguide/content/
service-notification-types-crud.html for details.
required: true
author: Ash Wilson (@smashwilson)
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Monitoring notification example
gather_facts: false
hosts: local
connection: local
tasks:
- name: Email me when something goes wrong.
community.general.rax_mon_notification:
credentials: ~/.rax_pub
label: omg
notification_type: email
details:
address: me@mailhost.com
register: the_notification
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
def notification(module, state, label, notification_type, details):
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
changed = False
notification = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = []
for n in cm.list_notifications():
if n.label == label:
existing.append(n)
if existing:
notification = existing[0]
if state == 'present':
should_update = False
should_delete = False
should_create = False
if len(existing) > 1:
module.fail_json(msg='%s existing notifications are labelled %s.' %
(len(existing), label))
if notification:
should_delete = (notification_type != notification.type)
should_update = (details != notification.details)
if should_update and not should_delete:
notification.update(details=details)
changed = True
if should_delete:
notification.delete()
else:
should_create = True
if should_create:
notification = cm.create_notification(notification_type,
label=label, details=details)
changed = True
else:
for n in existing:
n.delete()
changed = True
if notification:
notification_dict = {
"id": notification.id,
"type": notification.type,
"label": notification.label,
"details": notification.details
}
module.exit_json(changed=changed, notification=notification_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
notification_type=dict(required=True, choices=['webhook', 'email', 'pagerduty']),
details=dict(required=True, type='dict')
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
notification_type = module.params.get('notification_type')
details = module.params.get('details')
setup_rax_module(module, pyrax)
notification(module, state, label, notification_type, details)
if __name__ == '__main__':
main()
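
A short pyrax sketch of creating the e-mail notification shown in the example above; the label and address are placeholders:

# Illustrative sketch only, not part of the removed module.
import pyrax


def ensure_email_notification(label='omg', address='me@mailhost.com'):
    """Create an e-mail notification unless one with this label already exists."""
    cm = pyrax.cloud_monitoring
    existing = [n for n in cm.list_notifications() if n.label == label]
    if existing:
        return existing[0]
    return cm.create_notification('email', label=label,
                                  details={'address': address})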

View file

@@ -1,191 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_mon_notification_plan
short_description: Create or delete a Rackspace Cloud Monitoring notification
plan.
description:
- Create or delete a Rackspace Cloud Monitoring notification plan by
associating existing rax_mon_notifications with severity levels. Rackspace
monitoring module flow | rax_mon_entity -> rax_mon_check ->
rax_mon_notification -> *rax_mon_notification_plan* -> rax_mon_alarm
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
state:
type: str
description:
- Ensure that the notification plan with this O(label) exists or does not
exist.
choices: ['present', 'absent']
default: present
label:
type: str
description:
- Defines a friendly name for this notification plan. String between 1 and
255 characters long.
required: true
critical_state:
type: list
elements: str
description:
- Notification list to use when the alarm state is CRITICAL. Must be an
array of valid rax_mon_notification ids.
warning_state:
type: list
elements: str
description:
- Notification list to use when the alarm state is WARNING. Must be an array
of valid rax_mon_notification ids.
ok_state:
type: list
elements: str
description:
- Notification list to use when the alarm state is OK. Must be an array of
valid rax_mon_notification ids.
author: Ash Wilson (@smashwilson)
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Example notification plan
gather_facts: false
hosts: local
connection: local
tasks:
- name: Establish who gets called when.
community.general.rax_mon_notification_plan:
credentials: ~/.rax_pub
state: present
label: defcon1
critical_state:
- "{{ everyone['notification']['id'] }}"
warning_state:
- "{{ opsfloor['notification']['id'] }}"
register: defcon1
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
def notification_plan(module, state, label, critical_state, warning_state, ok_state):
if len(label) < 1 or len(label) > 255:
module.fail_json(msg='label must be between 1 and 255 characters long')
changed = False
notification_plan = None
cm = pyrax.cloud_monitoring
if not cm:
module.fail_json(msg='Failed to instantiate client. This typically '
'indicates an invalid region or an incorrectly '
'capitalized region name.')
existing = []
for n in cm.list_notification_plans():
if n.label == label:
existing.append(n)
if existing:
notification_plan = existing[0]
if state == 'present':
should_create = False
should_delete = False
if len(existing) > 1:
module.fail_json(msg='%s notification plans are labelled %s.' %
(len(existing), label))
if notification_plan:
should_delete = (critical_state and critical_state != notification_plan.critical_state) or \
(warning_state and warning_state != notification_plan.warning_state) or \
(ok_state and ok_state != notification_plan.ok_state)
if should_delete:
notification_plan.delete()
should_create = True
else:
should_create = True
if should_create:
notification_plan = cm.create_notification_plan(label=label,
critical_state=critical_state,
warning_state=warning_state,
ok_state=ok_state)
changed = True
else:
for np in existing:
np.delete()
changed = True
if notification_plan:
notification_plan_dict = {
"id": notification_plan.id,
"critical_state": notification_plan.critical_state,
"warning_state": notification_plan.warning_state,
"ok_state": notification_plan.ok_state,
"metadata": notification_plan.metadata
}
module.exit_json(changed=changed, notification_plan=notification_plan_dict)
else:
module.exit_json(changed=changed)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present', choices=['present', 'absent']),
label=dict(required=True),
critical_state=dict(type='list', elements='str'),
warning_state=dict(type='list', elements='str'),
ok_state=dict(type='list', elements='str'),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
critical_state = module.params.get('critical_state')
warning_state = module.params.get('warning_state')
ok_state = module.params.get('ok_state')
setup_rax_module(module, pyrax)
notification_plan(module, state, label, critical_state, warning_state, ok_state)
if __name__ == '__main__':
main()
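
A notification plan simply maps alarm states to lists of notification IDs. A hedged pyrax sketch, assuming the notification IDs were captured from earlier create calls:

# Illustrative sketch only, not part of the removed module.
import pyrax


def ensure_plan(critical_ids, warning_ids=None, label='defcon1'):
    """Create a notification plan unless one with this label already exists."""
    cm = pyrax.cloud_monitoring
    existing = [p for p in cm.list_notification_plans() if p.label == label]
    if existing:
        return existing[0]
    return cm.create_notification_plan(label=label,
                                       critical_state=critical_ids,
                                       warning_state=warning_ids)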

View file

@@ -1,146 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_network
short_description: Create / delete an isolated network in Rackspace Public Cloud
description:
- Creates / deletes a Rackspace Public Cloud isolated network.
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
label:
type: str
description:
- Label (name) to give the network
required: true
cidr:
type: str
description:
- CIDR of the network being created.
author:
- "Christopher H. Laco (@claco)"
- "Jesse Keating (@omgjlk)"
extends_documentation_fragment:
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Build an Isolated Network
gather_facts: false
tasks:
- name: Network create request
local_action:
module: rax_network
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
state: present
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
def cloud_network(module, state, label, cidr):
changed = False
network = None
networks = []
if not pyrax.cloud_networks:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
if state == 'present':
if not cidr:
module.fail_json(msg='missing required arguments: cidr')
try:
network = pyrax.cloud_networks.find_network_by_label(label)
except pyrax.exceptions.NetworkNotFound:
try:
network = pyrax.cloud_networks.create(label, cidr=cidr)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
except Exception as e:
module.fail_json(msg='%s' % e.message)
elif state == 'absent':
try:
network = pyrax.cloud_networks.find_network_by_label(label)
network.delete()
changed = True
except pyrax.exceptions.NetworkNotFound:
pass
except Exception as e:
module.fail_json(msg='%s' % e.message)
if network:
instance = dict(id=network.id,
label=network.label,
cidr=network.cidr)
networks.append(instance)
module.exit_json(changed=changed, networks=networks)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
state=dict(default='present',
choices=['present', 'absent']),
label=dict(required=True),
cidr=dict()
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
state = module.params.get('state')
label = module.params.get('label')
cidr = module.params.get('cidr')
setup_rax_module(module, pyrax)
cloud_network(module, state, label, cidr)
if __name__ == '__main__':
main()
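
A minimal pyrax sketch of the isolated-network handling above (find by label, create with a CIDR if absent); credentials are assumed configured and the values mirror the example:

# Illustrative sketch only, not part of the removed module.
import pyrax


def ensure_network(label='my-net', cidr='192.168.3.0/24'):
    """Return the network with this label, creating it if it does not exist."""
    cn = pyrax.cloud_networks
    try:
        return cn.find_network_by_label(label)
    except pyrax.exceptions.NetworkNotFound:
        return cn.create(label, cidr=cidr)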

View file

@@ -1,147 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_queue
short_description: Create / delete a queue in Rackspace Public Cloud
description:
- Creates / deletes a Rackspace Public Cloud queue.
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
name:
type: str
description:
- Name to give the queue
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
author:
- "Christopher H. Laco (@claco)"
- "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
- name: Build a Queue
gather_facts: false
hosts: local
connection: local
tasks:
- name: Queue create request
local_action:
module: rax_queue
credentials: ~/.raxpub
name: my-queue
region: DFW
state: present
register: my_queue
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import rax_argument_spec, rax_required_together, setup_rax_module
def cloud_queue(module, state, name):
for arg_name, arg in (('state', state), ('name', name)):
if not arg:
module.fail_json(msg='%s is required for rax_queue' % arg_name)
changed = False
queues = []
instance = {}
cq = pyrax.queues
if not cq:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
for queue in cq.list():
if name != queue.name:
continue
queues.append(queue)
if len(queues) > 1:
module.fail_json(msg='Multiple Queues were matched by name')
if state == 'present':
if not queues:
try:
queue = cq.create(name)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
queue = queues[0]
instance = dict(name=queue.name)
result = dict(changed=changed, queue=instance)
module.exit_json(**result)
elif state == 'absent':
if queues:
queue = queues[0]
try:
queue.delete()
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
module.exit_json(changed=changed, queue=instance)
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
name=dict(),
state=dict(default='present', choices=['present', 'absent']),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together()
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
name = module.params.get('name')
state = module.params.get('state')
setup_rax_module(module, pyrax)
cloud_queue(module, state, name)
if __name__ == '__main__':
main()


@@ -1,441 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_scaling_group
short_description: Manipulate Rackspace Cloud Autoscale Groups
description:
- Manipulate Rackspace Cloud Autoscale Groups
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
config_drive:
description:
- Attach read-only configuration drive to server as label config-2
type: bool
default: false
cooldown:
type: int
description:
- The period of time, in seconds, that must pass before any scaling can
occur after the previous scaling. Must be an integer between 0 and
86400 (24 hrs).
default: 300
disk_config:
type: str
description:
- Disk partitioning strategy
- If not specified, it will fallback to V(auto).
choices:
- auto
- manual
files:
type: dict
default: {}
description:
- 'Files to insert into the instance. Hash of C(remotepath: localpath)'
flavor:
type: str
description:
- flavor to use for the instance
required: true
image:
type: str
description:
- image to use for the instance. Can be an C(id), C(human_id) or C(name).
required: true
key_name:
type: str
description:
- key pair to use on the instance
loadbalancers:
type: list
elements: dict
description:
- List of load balancer C(id) and C(port) hashes
max_entities:
type: int
description:
- The maximum number of entities that are allowed in the scaling group.
Must be an integer between 0 and 1000.
required: true
meta:
type: dict
default: {}
description:
- A hash of metadata to associate with the instance
min_entities:
type: int
description:
- The minimum number of entities that are allowed in the scaling group.
Must be an integer between 0 and 1000.
required: true
name:
type: str
description:
- Name to give the scaling group
required: true
networks:
type: list
elements: str
description:
- The network to attach to the instances. If specified, you must include
ALL networks including the public and private interfaces. Can be C(id)
or C(label).
default:
- public
- private
server_name:
type: str
description:
- The base name for servers created by Autoscale
required: true
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
user_data:
type: str
description:
- Data to be uploaded to the servers config drive. This option implies
O(config_drive). Can be a file path or a string
wait:
description:
- wait for the scaling group to finish provisioning the minimum amount of
servers
type: bool
default: false
wait_timeout:
type: int
description:
- how long before wait gives up, in seconds
default: 300
author: "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
---
- hosts: localhost
gather_facts: false
connection: local
tasks:
- community.general.rax_scaling_group:
credentials: ~/.raxpub
region: ORD
cooldown: 300
flavor: performance1-1
image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
min_entities: 5
max_entities: 10
name: ASG Test
server_name: asgtest
loadbalancers:
- id: 228385
port: 80
register: asg
'''
import base64
import json
import os
import time
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (
rax_argument_spec, rax_find_image, rax_find_network,
rax_required_together, rax_to_dict, setup_rax_module,
rax_scaling_group_personality_file,
)
from ansible.module_utils.six import string_types
def rax_asg(module, cooldown=300, disk_config=None, files=None, flavor=None,
image=None, key_name=None, loadbalancers=None, meta=None,
min_entities=0, max_entities=0, name=None, networks=None,
server_name=None, state='present', user_data=None,
config_drive=False, wait=True, wait_timeout=300):
files = {} if files is None else files
loadbalancers = [] if loadbalancers is None else loadbalancers
meta = {} if meta is None else meta
networks = [] if networks is None else networks
changed = False
au = pyrax.autoscale
if not au:
module.fail_json(msg='Failed to instantiate clients. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
if user_data:
config_drive = True
if user_data and os.path.isfile(user_data):
try:
f = open(user_data)
user_data = f.read()
f.close()
except Exception as e:
module.fail_json(msg='Failed to load %s' % user_data)
if state == 'present':
# Normalize and ensure all metadata values are strings
if meta:
for k, v in meta.items():
if isinstance(v, list):
meta[k] = ','.join(['%s' % i for i in v])
elif isinstance(v, dict):
meta[k] = json.dumps(v)
elif not isinstance(v, string_types):
meta[k] = '%s' % v
if image:
image = rax_find_image(module, pyrax, image)
nics = []
if networks:
for network in networks:
nics.extend(rax_find_network(module, pyrax, network))
for nic in nics:
# pyrax is currently returning net-id, but we need uuid
# this check makes this forward compatible for a time when
# pyrax uses uuid instead
if nic.get('net-id'):
nic.update(uuid=nic['net-id'])
del nic['net-id']
# Handle the file contents
personality = rax_scaling_group_personality_file(module, files)
lbs = []
if loadbalancers:
for lb in loadbalancers:
try:
lb_id = int(lb.get('id'))
except (ValueError, TypeError):
module.fail_json(msg='Load balancer ID is not an integer: '
'%s' % lb.get('id'))
try:
port = int(lb.get('port'))
except (ValueError, TypeError):
module.fail_json(msg='Load balancer port is not an '
'integer: %s' % lb.get('port'))
if not lb_id or not port:
continue
lbs.append((lb_id, port))
try:
sg = au.find(name=name)
except pyrax.exceptions.NoUniqueMatch as e:
module.fail_json(msg='%s' % e.message)
except pyrax.exceptions.NotFound:
try:
sg = au.create(name, cooldown=cooldown,
min_entities=min_entities,
max_entities=max_entities,
launch_config_type='launch_server',
server_name=server_name, image=image,
flavor=flavor, disk_config=disk_config,
metadata=meta, personality=personality,
networks=nics, load_balancers=lbs,
key_name=key_name, config_drive=config_drive,
user_data=user_data)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
if not changed:
# Scaling Group Updates
group_args = {}
if cooldown != sg.cooldown:
group_args['cooldown'] = cooldown
if min_entities != sg.min_entities:
group_args['min_entities'] = min_entities
if max_entities != sg.max_entities:
group_args['max_entities'] = max_entities
if group_args:
changed = True
sg.update(**group_args)
# Launch Configuration Updates
lc = sg.get_launch_config()
lc_args = {}
if server_name != lc.get('name'):
lc_args['server_name'] = server_name
if image != lc.get('image'):
lc_args['image'] = image
if flavor != lc.get('flavor'):
lc_args['flavor'] = flavor
disk_config = disk_config or 'AUTO'
if ((disk_config or lc.get('disk_config')) and
disk_config != lc.get('disk_config', 'AUTO')):
lc_args['disk_config'] = disk_config
if (meta or lc.get('meta')) and meta != lc.get('metadata'):
lc_args['metadata'] = meta
test_personality = []
for p in personality:
test_personality.append({
'path': p['path'],
'contents': base64.b64encode(p['contents'])
})
if ((test_personality or lc.get('personality')) and
test_personality != lc.get('personality')):
lc_args['personality'] = personality
if nics != lc.get('networks'):
lc_args['networks'] = nics
if lbs != lc.get('load_balancers'):
# Work around for https://github.com/rackspace/pyrax/pull/393
lc_args['load_balancers'] = sg.manager._resolve_lbs(lbs)
if key_name != lc.get('key_name'):
lc_args['key_name'] = key_name
if config_drive != lc.get('config_drive', False):
lc_args['config_drive'] = config_drive
if (user_data and
base64.b64encode(user_data) != lc.get('user_data')):
lc_args['user_data'] = user_data
if lc_args:
# Work around for https://github.com/rackspace/pyrax/pull/389
if 'flavor' not in lc_args:
lc_args['flavor'] = lc.get('flavor')
changed = True
sg.update_launch_config(**lc_args)
sg.get()
if wait:
end_time = time.time() + wait_timeout
infinite = wait_timeout == 0
while infinite or time.time() < end_time:
state = sg.get_state()
if state["pending_capacity"] == 0:
break
time.sleep(5)
module.exit_json(changed=changed, autoscale_group=rax_to_dict(sg))
else:
try:
sg = au.find(name=name)
sg.delete()
changed = True
except pyrax.exceptions.NotFound as e:
sg = {}
except Exception as e:
module.fail_json(msg='%s' % e.message)
module.exit_json(changed=changed, autoscale_group=rax_to_dict(sg))
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
config_drive=dict(default=False, type='bool'),
cooldown=dict(type='int', default=300),
disk_config=dict(choices=['auto', 'manual']),
files=dict(type='dict', default={}),
flavor=dict(required=True),
image=dict(required=True),
key_name=dict(),
loadbalancers=dict(type='list', elements='dict'),
meta=dict(type='dict', default={}),
min_entities=dict(type='int', required=True),
max_entities=dict(type='int', required=True),
name=dict(required=True),
networks=dict(type='list', elements='str', default=['public', 'private']),
server_name=dict(required=True),
state=dict(default='present', choices=['present', 'absent']),
user_data=dict(no_log=True),
wait=dict(default=False, type='bool'),
wait_timeout=dict(default=300, type='int'),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
config_drive = module.params.get('config_drive')
cooldown = module.params.get('cooldown')
disk_config = module.params.get('disk_config')
if disk_config:
disk_config = disk_config.upper()
files = module.params.get('files')
flavor = module.params.get('flavor')
image = module.params.get('image')
key_name = module.params.get('key_name')
loadbalancers = module.params.get('loadbalancers')
meta = module.params.get('meta')
min_entities = module.params.get('min_entities')
max_entities = module.params.get('max_entities')
name = module.params.get('name')
networks = module.params.get('networks')
server_name = module.params.get('server_name')
state = module.params.get('state')
user_data = module.params.get('user_data')
if not 0 <= min_entities <= 1000 or not 0 <= max_entities <= 1000:
module.fail_json(msg='min_entities and max_entities must be an '
'integer between 0 and 1000')
if not 0 <= cooldown <= 86400:
module.fail_json(msg='cooldown must be an integer between 0 and 86400')
setup_rax_module(module, pyrax)
rax_asg(module, cooldown=cooldown, disk_config=disk_config,
files=files, flavor=flavor, image=image, meta=meta,
key_name=key_name, loadbalancers=loadbalancers,
min_entities=min_entities, max_entities=max_entities,
name=name, networks=networks, server_name=server_name,
state=state, config_drive=config_drive, user_data=user_data)
if __name__ == '__main__':
main()


@@ -1,294 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: rax_scaling_policy
short_description: Manipulate Rackspace Cloud Autoscale Scaling Policy
description:
- Manipulate Rackspace Cloud Autoscale Scaling Policy
attributes:
check_mode:
support: none
diff_mode:
support: none
options:
at:
type: str
description:
- The UTC time when this policy will be executed. The time must be
formatted according to C(yyyy-MM-dd'T'HH:mm:ss.SSS) such as
V(2013-05-19T08:07:08Z)
change:
type: int
description:
- The change, either as a number of servers or as a percentage, to make
in the scaling group. If this is a percentage, you must set
O(is_percent) to V(true) also.
cron:
type: str
description:
- The time when the policy will be executed, as a cron entry. For
example, if this is parameter is set to V(1 0 * * *).
cooldown:
type: int
description:
- The period of time, in seconds, that must pass before any scaling can
occur after the previous scaling. Must be an integer between 0 and
86400 (24 hrs).
default: 300
desired_capacity:
type: int
description:
- The desired server capacity of the scaling the group; that is, how
many servers should be in the scaling group.
is_percent:
description:
- Whether the value in O(change) is a percent value
default: false
type: bool
name:
type: str
description:
- Name to give the policy
required: true
policy_type:
type: str
description:
- The type of policy that will be executed for the current release.
choices:
- webhook
- schedule
required: true
scaling_group:
type: str
description:
- Name of the scaling group that this policy will be added to
required: true
state:
type: str
description:
- Indicate desired state of the resource
choices:
- present
- absent
default: present
author: "Matt Martz (@sivel)"
extends_documentation_fragment:
- community.general.rackspace
- community.general.rackspace.openstack
- community.general.attributes
'''
EXAMPLES = '''
---
- hosts: localhost
gather_facts: false
connection: local
tasks:
- community.general.rax_scaling_policy:
credentials: ~/.raxpub
region: ORD
at: '2013-05-19T08:07:08Z'
change: 25
cooldown: 300
is_percent: true
name: ASG Test Policy - at
policy_type: schedule
scaling_group: ASG Test
register: asps_at
- community.general.rax_scaling_policy:
credentials: ~/.raxpub
region: ORD
cron: '1 0 * * *'
change: 25
cooldown: 300
is_percent: true
name: ASG Test Policy - cron
policy_type: schedule
scaling_group: ASG Test
register: asp_cron
- community.general.rax_scaling_policy:
credentials: ~/.raxpub
region: ORD
cooldown: 300
desired_capacity: 5
name: ASG Test Policy - webhook
policy_type: webhook
scaling_group: ASG Test
register: asp_webhook
'''
try:
import pyrax
HAS_PYRAX = True
except ImportError:
HAS_PYRAX = False
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.rax import (UUID, rax_argument_spec, rax_required_together, rax_to_dict,
setup_rax_module)
def rax_asp(module, at=None, change=0, cron=None, cooldown=300,
desired_capacity=0, is_percent=False, name=None,
policy_type=None, scaling_group=None, state='present'):
changed = False
au = pyrax.autoscale
if not au:
module.fail_json(msg='Failed to instantiate client. This '
'typically indicates an invalid region or an '
'incorrectly capitalized region name.')
try:
UUID(scaling_group)
except ValueError:
try:
sg = au.find(name=scaling_group)
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
try:
sg = au.get(scaling_group)
except Exception as e:
module.fail_json(msg='%s' % e.message)
if state == 'present':
policies = filter(lambda p: name == p.name, sg.list_policies())
if len(policies) > 1:
module.fail_json(msg='No unique policy match found by name')
if at:
args = dict(at=at)
elif cron:
args = dict(cron=cron)
else:
args = None
if not policies:
try:
policy = sg.add_policy(name, policy_type=policy_type,
cooldown=cooldown, change=change,
is_percent=is_percent,
desired_capacity=desired_capacity,
args=args)
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
else:
policy = policies[0]
kwargs = {}
if policy_type != policy.type:
kwargs['policy_type'] = policy_type
if cooldown != policy.cooldown:
kwargs['cooldown'] = cooldown
if hasattr(policy, 'change') and change != policy.change:
kwargs['change'] = change
if hasattr(policy, 'changePercent') and is_percent is False:
kwargs['change'] = change
kwargs['is_percent'] = False
elif hasattr(policy, 'change') and is_percent is True:
kwargs['change'] = change
kwargs['is_percent'] = True
if hasattr(policy, 'desiredCapacity') and change:
kwargs['change'] = change
elif ((hasattr(policy, 'change') or
hasattr(policy, 'changePercent')) and desired_capacity):
kwargs['desired_capacity'] = desired_capacity
if hasattr(policy, 'args') and args != policy.args:
kwargs['args'] = args
if kwargs:
policy.update(**kwargs)
changed = True
policy.get()
module.exit_json(changed=changed, autoscale_policy=rax_to_dict(policy))
else:
try:
policies = filter(lambda p: name == p.name, sg.list_policies())
if len(policies) > 1:
module.fail_json(msg='No unique policy match found by name')
elif not policies:
policy = {}
else:
policy.delete()
changed = True
except Exception as e:
module.fail_json(msg='%s' % e.message)
module.exit_json(changed=changed, autoscale_policy=rax_to_dict(policy))
def main():
argument_spec = rax_argument_spec()
argument_spec.update(
dict(
at=dict(),
change=dict(type='int'),
cron=dict(),
cooldown=dict(type='int', default=300),
desired_capacity=dict(type='int'),
is_percent=dict(type='bool', default=False),
name=dict(required=True),
policy_type=dict(required=True, choices=['webhook', 'schedule']),
scaling_group=dict(required=True),
state=dict(default='present', choices=['present', 'absent']),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_together=rax_required_together(),
mutually_exclusive=[
['cron', 'at'],
['change', 'desired_capacity'],
]
)
if not HAS_PYRAX:
module.fail_json(msg='pyrax is required for this module')
at = module.params.get('at')
change = module.params.get('change')
cron = module.params.get('cron')
cooldown = module.params.get('cooldown')
desired_capacity = module.params.get('desired_capacity')
is_percent = module.params.get('is_percent')
name = module.params.get('name')
policy_type = module.params.get('policy_type')
scaling_group = module.params.get('scaling_group')
state = module.params.get('state')
if (at or cron) and policy_type == 'webhook':
module.fail_json(msg='policy_type=schedule is required for a time '
'based policy')
setup_rax_module(module, pyrax)
rax_asp(module, at=at, change=change, cron=cron, cooldown=cooldown,
desired_capacity=desired_capacity, is_percent=is_percent,
name=name, policy_type=policy_type, scaling_group=scaling_group,
state=state)
if __name__ == '__main__':
main()


@@ -109,9 +109,10 @@ options:
  timeout:
    description:
      - Timeout in seconds for HTTP requests to OOB controller.
-      - The default value for this param is C(10) but that is being deprecated
-        and it will be replaced with C(60) in community.general 9.0.0.
+      - The default value for this parameter changed from V(10) to V(60)
+        in community.general 9.0.0.
    type: int
+    default: 60
  boot_override_mode:
    description:
      - Boot mode when using an override.
@@ -805,7 +806,7 @@ def main():
            update_username=dict(type='str', aliases=["account_updatename"]),
            account_properties=dict(type='dict', default={}),
            bootdevice=dict(),
-            timeout=dict(type='int'),
+            timeout=dict(type='int', default=60),
            uefi_target=dict(),
            boot_next=dict(),
            boot_override_mode=dict(choices=['Legacy', 'UEFI']),
@@ -854,16 +855,6 @@ def main():
        supports_check_mode=False
    )
-    if module.params['timeout'] is None:
-        timeout = 10
-        module.deprecate(
-            'The default value {0} for parameter param1 is being deprecated and it will be replaced by {1}'.format(
-                10, 60
-            ),
-            version='9.0.0',
-            collection_name='community.general'
-        )
    category = module.params['category']
    command_list = module.params['command']
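
The hunk above appears to belong to the redfish_command module (the file name is not shown in this view). As a hedged illustration of the behaviour change only, a task can now rely on the fixed 60-second default or pin the old value explicitly; the host, credentials and chosen command below are placeholders:

- name: Reboot a system, relying on the new 60-second default timeout
  community.general.redfish_command:
    category: Systems
    command: PowerReboot
    baseuri: "{{ baseuri }}"
    username: "{{ username }}"
    password: "{{ password }}"

- name: Same request with the previous 10-second behaviour pinned explicitly
  community.general.redfish_command:
    category: Systems
    command: PowerReboot
    baseuri: "{{ baseuri }}"
    username: "{{ username }}"
    password: "{{ password }}"
    timeout: 10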


@@ -64,9 +64,10 @@ options:
  timeout:
    description:
      - Timeout in seconds for HTTP requests to OOB controller.
-      - The default value for this param is C(10) but that is being deprecated
-        and it will be replaced with C(60) in community.general 9.0.0.
+      - The default value for this parameter changed from V(10) to V(60)
+        in community.general 9.0.0.
    type: int
+    default: 60
  boot_order:
    required: false
    description:
@@ -384,7 +385,7 @@ def main():
            password=dict(no_log=True),
            auth_token=dict(no_log=True),
            bios_attributes=dict(type='dict', default={}),
-            timeout=dict(type='int'),
+            timeout=dict(type='int', default=60),
            boot_order=dict(type='list', elements='str', default=[]),
            network_protocols=dict(
                type='dict',
@@ -418,16 +419,6 @@ def main():
        supports_check_mode=False
    )
-    if module.params['timeout'] is None:
-        timeout = 10
-        module.deprecate(
-            'The default value {0} for parameter param1 is being deprecated and it will be replaced by {1}'.format(
-                10, 60
-            ),
-            version='9.0.0',
-            collection_name='community.general'
-        )
    category = module.params['category']
    command_list = module.params['command']
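
This hunk looks like the matching change in redfish_config (again, the file name is not shown here). A minimal sketch of a task that simply accepts the new default while setting a BIOS attribute; the attribute name, host and credentials are placeholder values:

- name: Set a BIOS attribute, accepting the new 60-second default timeout
  community.general.redfish_config:
    category: Systems
    command: SetBiosAttributes
    bios_attributes:
      ProcTurboMode: Disabled
    baseuri: "{{ baseuri }}"
    username: "{{ username }}"
    password: "{{ password }}"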


@@ -63,9 +63,10 @@ options:
  timeout:
    description:
      - Timeout in seconds for HTTP requests to OOB controller.
-      - The default value for this param is C(10) but that is being deprecated
-        and it will be replaced with C(60) in community.general 9.0.0.
+      - The default value for this parameter changed from V(10) to V(60)
+        in community.general 9.0.0.
    type: int
+    default: 60
  update_handle:
    required: false
    description:
@@ -407,7 +408,7 @@ def main():
            username=dict(),
            password=dict(no_log=True),
            auth_token=dict(no_log=True),
-            timeout=dict(type='int'),
+            timeout=dict(type='int', default=60),
            update_handle=dict(),
            manager=dict(),
        ),
@@ -423,16 +424,6 @@ def main():
        supports_check_mode=True,
    )
-    if module.params['timeout'] is None:
-        timeout = 10
-        module.deprecate(
-            'The default value {0} for parameter param1 is being deprecated and it will be replaced by {1}'.format(
-                10, 60
-            ),
-            version='9.0.0',
-            collection_name='community.general'
-        )
    # admin credentials used for authentication
    creds = {'user': module.params['username'],
             'pswd': module.params['password'],


@@ -123,10 +123,9 @@ options:
    description:
      - Upon successful registration, auto-consume available subscriptions
      - |
-        Please note that the alias O(autosubscribe) will be removed in
+        Please note that the alias O(ignore:autosubscribe) was removed in
         community.general 9.0.0.
    type: bool
-    aliases: [autosubscribe]
  activationkey:
    description:
      - supply an activation key for use with registration
@@ -1106,17 +1105,7 @@ def main():
        'server_port': {},
        'rhsm_baseurl': {},
        'rhsm_repo_ca_cert': {},
-        'auto_attach': {
-            'type': 'bool',
-            'aliases': ['autosubscribe'],
-            'deprecated_aliases': [
-                {
-                    'name': 'autosubscribe',
-                    'version': '9.0.0',
-                    'collection_name': 'community.general',
-                },
-            ],
-        },
+        'auto_attach': {'type': 'bool'},
        'activationkey': {'no_log': True},
        'org_id': {},
        'environment': {},
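
With the autosubscribe alias gone, playbooks have to spell out the canonical option name. A minimal sketch of a registration task under that assumption; the activation key and organization ID are placeholder variables:

- name: Register the system and auto-attach subscriptions (autosubscribe is no longer accepted)
  community.general.redhat_subscription:
    state: present
    activationkey: "{{ rhsm_activation_key }}"
    org_id: "{{ rhsm_org_id }}"
    auto_attach: true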


@@ -1,228 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
deprecated:
removed_in: 9.0.0
why: the endpoints this module relies on do not exist any more and do not resolve to IPs in DNS.
alternative: no known alternative at this point
module: stackdriver
short_description: Send code deploy and annotation events to stackdriver
description:
- Send code deploy and annotation events to Stackdriver
author: "Ben Whaley (@bwhaley)"
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
key:
type: str
description:
- API key.
required: true
event:
type: str
description:
- The type of event to send, either annotation or deploy
choices: ['annotation', 'deploy']
required: true
revision_id:
type: str
description:
- The revision of the code that was deployed. Required for deploy events
deployed_by:
type: str
description:
- The person or robot responsible for deploying the code
default: "Ansible"
deployed_to:
type: str
description:
- "The environment code was deployed to. (ie: development, staging, production)"
repository:
type: str
description:
- The repository (or project) deployed
msg:
type: str
description:
- The contents of the annotation message, in plain text. Limited to 256 characters. Required for annotation.
annotated_by:
type: str
description:
- The person or robot who the annotation should be attributed to.
default: "Ansible"
level:
type: str
description:
- one of INFO/WARN/ERROR, defaults to INFO if not supplied. May affect display.
choices: ['INFO', 'WARN', 'ERROR']
default: 'INFO'
instance_id:
type: str
description:
- id of an EC2 instance that this event should be attached to, which will limit the contexts where this event is shown
event_epoch:
type: str
description:
- "Unix timestamp of where the event should appear in the timeline, defaults to now. Be careful with this."
'''
EXAMPLES = '''
- name: Send a code deploy event to stackdriver
community.general.stackdriver:
key: AAAAAA
event: deploy
deployed_to: production
deployed_by: leeroyjenkins
repository: MyWebApp
revision_id: abcd123
- name: Send an annotation event to stackdriver
community.general.stackdriver:
key: AAAAAA
event: annotation
msg: Greetings from Ansible
annotated_by: leeroyjenkins
level: WARN
instance_id: i-abcd1234
'''
# ===========================================
# Stackdriver module specific support methods.
#
import json
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.urls import fetch_url
def send_deploy_event(module, key, revision_id, deployed_by='Ansible', deployed_to=None, repository=None):
"""Send a deploy event to Stackdriver"""
deploy_api = "https://event-gateway.stackdriver.com/v1/deployevent"
params = {}
params['revision_id'] = revision_id
params['deployed_by'] = deployed_by
if deployed_to:
params['deployed_to'] = deployed_to
if repository:
params['repository'] = repository
return do_send_request(module, deploy_api, params, key)
def send_annotation_event(module, key, msg, annotated_by='Ansible', level=None, instance_id=None, event_epoch=None):
"""Send an annotation event to Stackdriver"""
annotation_api = "https://event-gateway.stackdriver.com/v1/annotationevent"
params = {}
params['message'] = msg
if annotated_by:
params['annotated_by'] = annotated_by
if level:
params['level'] = level
if instance_id:
params['instance_id'] = instance_id
if event_epoch:
params['event_epoch'] = event_epoch
return do_send_request(module, annotation_api, params, key)
def do_send_request(module, url, params, key):
data = json.dumps(params)
headers = {
'Content-Type': 'application/json',
'x-stackdriver-apikey': key
}
response, info = fetch_url(module, url, headers=headers, data=data, method='POST')
if info['status'] != 200:
module.fail_json(msg="Unable to send msg: %s" % info['msg'])
# ===========================================
# Module execution.
#
def main():
module = AnsibleModule(
argument_spec=dict( # @TODO add types
key=dict(required=True, no_log=True),
event=dict(required=True, choices=['deploy', 'annotation']),
msg=dict(),
revision_id=dict(),
annotated_by=dict(default='Ansible'),
level=dict(default='INFO', choices=['INFO', 'WARN', 'ERROR']),
instance_id=dict(),
event_epoch=dict(), # @TODO int?
deployed_by=dict(default='Ansible'),
deployed_to=dict(),
repository=dict(),
),
supports_check_mode=True
)
key = module.params["key"]
event = module.params["event"]
# Annotation params
msg = module.params["msg"]
annotated_by = module.params["annotated_by"]
level = module.params["level"]
instance_id = module.params["instance_id"]
event_epoch = module.params["event_epoch"]
# Deploy params
revision_id = module.params["revision_id"]
deployed_by = module.params["deployed_by"]
deployed_to = module.params["deployed_to"]
repository = module.params["repository"]
##################################################################
# deploy requires revision_id
# annotation requires msg
# We verify these manually
##################################################################
if event == 'deploy':
if not revision_id:
module.fail_json(msg="revision_id required for deploy events")
try:
send_deploy_event(module, key, revision_id, deployed_by, deployed_to, repository)
except Exception as e:
module.fail_json(msg="unable to sent deploy event: %s" % to_native(e),
exception=traceback.format_exc())
if event == 'annotation':
if not msg:
module.fail_json(msg="msg required for annotation events")
try:
send_annotation_event(module, key, msg, annotated_by, level, instance_id, event_epoch)
except Exception as e:
module.fail_json(msg="unable to sent annotation event: %s" % to_native(e),
exception=traceback.format_exc())
changed = True
module.exit_json(changed=changed, deployed_by=deployed_by)
if __name__ == '__main__':
main()


@@ -1,213 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2015, Quentin Stafford-Fraser, with contributions gratefully acknowledged from:
# * Andy Baker
# * Federico Tarantini
#
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Create a Webfaction application using Ansible and the Webfaction API
#
# Valid application types can be found by looking here:
# https://docs.webfaction.com/xmlrpc-api/apps.html#application-types
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
deprecated:
removed_in: 9.0.0
why: the endpoints this module relies on do not exist any more and do not resolve to IPs in DNS.
alternative: no known alternative at this point
module: webfaction_app
short_description: Add or remove applications on a Webfaction host
description:
- Add or remove applications on a Webfaction host. Further documentation at U(https://github.com/quentinsf/ansible-webfaction).
author: Quentin Stafford-Fraser (@quentinsf)
notes:
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you do not specify C(localhost) as
your host, you may want to add C(serial=1) to the plays.
- See L(the webfaction API, https://docs.webfaction.com/xmlrpc-api/) for more info.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
name:
description:
- The name of the application
required: true
type: str
state:
description:
- Whether the application should exist
choices: ['present', 'absent']
default: "present"
type: str
type:
description:
- The type of application to create. See the Webfaction docs at U(https://docs.webfaction.com/xmlrpc-api/apps.html) for a list.
required: true
type: str
autostart:
description:
- Whether the app should restart with an C(autostart.cgi) script
type: bool
default: false
extra_info:
description:
- Any extra parameters required by the app
default: ''
type: str
port_open:
description:
- IF the port should be opened
type: bool
default: false
login_name:
description:
- The webfaction account to use
required: true
type: str
login_password:
description:
- The webfaction password to use
required: true
type: str
machine:
description:
- The machine name to use (optional for accounts with only one machine)
type: str
'''
EXAMPLES = '''
- name: Create a test app
community.general.webfaction_app:
name: "my_wsgi_app1"
state: present
type: mod_wsgi35-python27
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
machine: "{{webfaction_machine}}"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves import xmlrpc_client
webfaction = xmlrpc_client.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(choices=['present', 'absent'], default='present'),
type=dict(required=True),
autostart=dict(type='bool', default=False),
extra_info=dict(default=""),
port_open=dict(type='bool', default=False),
login_name=dict(required=True),
login_password=dict(required=True, no_log=True),
machine=dict(),
),
supports_check_mode=True
)
app_name = module.params['name']
app_type = module.params['type']
app_state = module.params['state']
if module.params['machine']:
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password'],
module.params['machine']
)
else:
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
app_list = webfaction.list_apps(session_id)
app_map = dict([(i['name'], i) for i in app_list])
existing_app = app_map.get(app_name)
result = {}
# Here's where the real stuff happens
if app_state == 'present':
# Does an app with this name already exist?
if existing_app:
if existing_app['type'] != app_type:
module.fail_json(msg="App already exists with different type. Please fix by hand.")
# If it exists with the right type, we don't change it
# Should check other parameters.
module.exit_json(
changed=False,
result=existing_app,
)
if not module.check_mode:
# If this isn't a dry run, create the app
result.update(
webfaction.create_app(
session_id, app_name, app_type,
module.boolean(module.params['autostart']),
module.params['extra_info'],
module.boolean(module.params['port_open'])
)
)
elif app_state == 'absent':
# If the app's already not there, nothing changed.
if not existing_app:
module.exit_json(
changed=False,
)
if not module.check_mode:
# If this isn't a dry run, delete the app
result.update(
webfaction.delete_app(session_id, app_name)
)
else:
module.fail_json(msg="Unknown state specified: {0}".format(app_state))
module.exit_json(
changed=True,
result=result
)
if __name__ == '__main__':
main()


@@ -1,209 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2015, Quentin Stafford-Fraser, with contributions gratefully acknowledged from:
# * Andy Baker
# * Federico Tarantini
#
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Create a webfaction database using Ansible and the Webfaction API
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
deprecated:
removed_in: 9.0.0
why: the endpoints this module relies on do not exist any more and do not resolve to IPs in DNS.
alternative: no known alternative at this point
module: webfaction_db
short_description: Add or remove a database on Webfaction
description:
- Add or remove a database on a Webfaction host. Further documentation at https://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
notes:
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you do not specify C(localhost) as
your host, you may want to add C(serial=1) to the plays.
- See L(the webfaction API, https://docs.webfaction.com/xmlrpc-api/) for more info.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
name:
description:
- The name of the database
required: true
type: str
state:
description:
- Whether the database should exist
choices: ['present', 'absent']
default: "present"
type: str
type:
description:
- The type of database to create.
required: true
choices: ['mysql', 'postgresql']
type: str
password:
description:
- The password for the new database user.
type: str
login_name:
description:
- The webfaction account to use
required: true
type: str
login_password:
description:
- The webfaction password to use
required: true
type: str
machine:
description:
- The machine name to use (optional for accounts with only one machine)
type: str
'''
EXAMPLES = '''
# This will also create a default DB user with the same
# name as the database, and the specified password.
- name: Create a database
community.general.webfaction_db:
name: "{{webfaction_user}}_db1"
password: mytestsql
type: mysql
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
machine: "{{webfaction_machine}}"
# Note that, for symmetry's sake, deleting a database using
# 'state: absent' will also delete the matching user.
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves import xmlrpc_client
webfaction = xmlrpc_client.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(choices=['present', 'absent'], default='present'),
# You can specify an IP address or hostname.
type=dict(required=True, choices=['mysql', 'postgresql']),
password=dict(no_log=True),
login_name=dict(required=True),
login_password=dict(required=True, no_log=True),
machine=dict(),
),
supports_check_mode=True
)
db_name = module.params['name']
db_state = module.params['state']
db_type = module.params['type']
db_passwd = module.params['password']
if module.params['machine']:
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password'],
module.params['machine']
)
else:
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
db_list = webfaction.list_dbs(session_id)
db_map = dict([(i['name'], i) for i in db_list])
existing_db = db_map.get(db_name)
user_list = webfaction.list_db_users(session_id)
user_map = dict([(i['username'], i) for i in user_list])
existing_user = user_map.get(db_name)
result = {}
# Here's where the real stuff happens
if db_state == 'present':
# Does a database with this name already exist?
if existing_db:
# Yes, but of a different type - fail
if existing_db['db_type'] != db_type:
module.fail_json(msg="Database already exists but is a different type. Please fix by hand.")
# If it exists with the right type, we don't change anything.
module.exit_json(
changed=False,
)
if not module.check_mode:
# If this isn't a dry run, create the db
# and default user.
result.update(
webfaction.create_db(
session_id, db_name, db_type, db_passwd
)
)
elif db_state == 'absent':
# If this isn't a dry run...
if not module.check_mode:
if not (existing_db or existing_user):
module.exit_json(changed=False,)
if existing_db:
# Delete the db if it exists
result.update(
webfaction.delete_db(session_id, db_name, db_type)
)
if existing_user:
# Delete the default db user if it exists
result.update(
webfaction.delete_db_user(session_id, db_name, db_type)
)
else:
module.fail_json(msg="Unknown state specified: {0}".format(db_state))
module.exit_json(
changed=True,
result=result
)
if __name__ == '__main__':
main()


@@ -1,184 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2015, Quentin Stafford-Fraser
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Create Webfaction domains and subdomains using Ansible and the Webfaction API
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
deprecated:
removed_in: 9.0.0
why: the endpoints this module relies on do not exist any more and do not resolve to IPs in DNS.
alternative: no known alternative at this point
module: webfaction_domain
short_description: Add or remove domains and subdomains on Webfaction
description:
- Add or remove domains or subdomains on a Webfaction host. Further documentation at https://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
notes:
- If you are I(deleting) domains by using O(state=absent), then note that if you specify subdomains, just those particular subdomains will be deleted.
If you do not specify subdomains, the domain will be deleted.
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you do not specify C(localhost) as
your host, you may want to add C(serial=1) to the plays.
- See L(the webfaction API, https://docs.webfaction.com/xmlrpc-api/) for more info.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
name:
description:
- The name of the domain
required: true
type: str
state:
description:
- Whether the domain should exist
choices: ['present', 'absent']
default: "present"
type: str
subdomains:
description:
- Any subdomains to create.
default: []
type: list
elements: str
login_name:
description:
- The webfaction account to use
required: true
type: str
login_password:
description:
- The webfaction password to use
required: true
type: str
'''
EXAMPLES = '''
- name: Create a test domain
community.general.webfaction_domain:
name: mydomain.com
state: present
subdomains:
- www
- blog
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
- name: Delete test domain and any subdomains
community.general.webfaction_domain:
name: mydomain.com
state: absent
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves import xmlrpc_client
webfaction = xmlrpc_client.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(choices=['present', 'absent'], default='present'),
subdomains=dict(default=[], type='list', elements='str'),
login_name=dict(required=True),
login_password=dict(required=True, no_log=True),
),
supports_check_mode=True
)
domain_name = module.params['name']
domain_state = module.params['state']
domain_subdomains = module.params['subdomains']
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
domain_list = webfaction.list_domains(session_id)
domain_map = dict([(i['domain'], i) for i in domain_list])
existing_domain = domain_map.get(domain_name)
result = {}
# Here's where the real stuff happens
if domain_state == 'present':
# Does an app with this name already exist?
if existing_domain:
if set(existing_domain['subdomains']) >= set(domain_subdomains):
# If it exists with the right subdomains, we don't change anything.
module.exit_json(
changed=False,
)
positional_args = [session_id, domain_name] + domain_subdomains
if not module.check_mode:
# If this isn't a dry run, create the app
# print positional_args
result.update(
webfaction.create_domain(
*positional_args
)
)
elif domain_state == 'absent':
# If the app's already not there, nothing changed.
if not existing_domain:
module.exit_json(
changed=False,
)
positional_args = [session_id, domain_name] + domain_subdomains
if not module.check_mode:
# If this isn't a dry run, delete the app
result.update(
webfaction.delete_domain(*positional_args)
)
else:
module.fail_json(msg="Unknown state specified: {0}".format(domain_state))
module.exit_json(
changed=True,
result=result
)
if __name__ == '__main__':
main()


@@ -1,152 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2015, Quentin Stafford-Fraser and Andy Baker
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Create webfaction mailbox using Ansible and the Webfaction API
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
deprecated:
removed_in: 9.0.0
why: the endpoints this module relies on do not exist any more and do not resolve to IPs in DNS.
alternative: no known alternative at this point
module: webfaction_mailbox
short_description: Add or remove mailboxes on Webfaction
description:
- Add or remove mailboxes on a Webfaction account. Further documentation at https://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
notes:
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you do not specify C(localhost) as
your host, you may want to add C(serial=1) to the plays.
- See L(the webfaction API, https://docs.webfaction.com/xmlrpc-api/) for more info.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
mailbox_name:
description:
- The name of the mailbox
required: true
type: str
mailbox_password:
description:
- The password for the mailbox
required: true
type: str
state:
description:
- Whether the mailbox should exist
choices: ['present', 'absent']
default: "present"
type: str
login_name:
description:
- The webfaction account to use
required: true
type: str
login_password:
description:
- The webfaction password to use
required: true
type: str
'''
EXAMPLES = '''
- name: Create a mailbox
community.general.webfaction_mailbox:
mailbox_name="mybox"
mailbox_password="myboxpw"
state=present
login_name={{webfaction_user}}
login_password={{webfaction_passwd}}
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves import xmlrpc_client
webfaction = xmlrpc_client.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec=dict(
mailbox_name=dict(required=True),
mailbox_password=dict(required=True, no_log=True),
state=dict(required=False, choices=['present', 'absent'], default='present'),
login_name=dict(required=True),
login_password=dict(required=True, no_log=True),
),
supports_check_mode=True
)
mailbox_name = module.params['mailbox_name']
site_state = module.params['state']
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
mailbox_list = [x['mailbox'] for x in webfaction.list_mailboxes(session_id)]
existing_mailbox = mailbox_name in mailbox_list
result = {}
# Here's where the real stuff happens
if site_state == 'present':
# Does a mailbox with this name already exist?
if existing_mailbox:
module.exit_json(changed=False,)
positional_args = [session_id, mailbox_name]
if not module.check_mode:
# If this isn't a dry run, create the mailbox
result.update(webfaction.create_mailbox(*positional_args))
elif site_state == 'absent':
# If the mailbox is already not there, nothing changed.
if not existing_mailbox:
module.exit_json(changed=False)
if not module.check_mode:
# If this isn't a dry run, delete the mailbox
result.update(webfaction.delete_mailbox(session_id, mailbox_name))
else:
module.fail_json(msg="Unknown state specified: {0}".format(site_state))
module.exit_json(changed=True, result=result)
if __name__ == '__main__':
main()


@@ -1,223 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2015, Quentin Stafford-Fraser
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Create Webfaction website using Ansible and the Webfaction API
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
deprecated:
removed_in: 9.0.0
why: the endpoints this module relies on do not exist any more and do not resolve to IPs in DNS.
alternative: no known alternative at this point
module: webfaction_site
short_description: Add or remove a website on a Webfaction host
description:
- Add or remove a website on a Webfaction host. Further documentation at https://github.com/quentinsf/ansible-webfaction.
author: Quentin Stafford-Fraser (@quentinsf)
notes:
- Sadly, you I(do) need to know your webfaction hostname for the C(host) parameter. But at least, unlike the API, you do not need to know the IP
address. You can use a DNS name.
- If a site of the same name exists in the account but on a different host, the operation will exit.
- >
You can run playbooks that use this on a local machine, or on a Webfaction host, or elsewhere, since the scripts use the remote webfaction API.
The location is not important. However, running them on multiple hosts I(simultaneously) is best avoided. If you do not specify C(localhost) as
your host, you may want to add C(serial=1) to the plays.
- See L(the webfaction API, https://docs.webfaction.com/xmlrpc-api/) for more info.
extends_documentation_fragment:
- community.general.attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
options:
name:
description:
- The name of the website
required: true
type: str
state:
description:
- Whether the website should exist
choices: ['present', 'absent']
default: "present"
type: str
host:
description:
- The webfaction host on which the site should be created.
required: true
type: str
https:
description:
- Whether or not to use HTTPS
type: bool
default: false
site_apps:
description:
- A mapping of URLs to apps
default: []
type: list
elements: list
subdomains:
description:
- A list of subdomains associated with this site.
default: []
type: list
elements: str
login_name:
description:
- The webfaction account to use
required: true
type: str
login_password:
description:
- The webfaction password to use
required: true
type: str
'''
EXAMPLES = '''
- name: Create website
community.general.webfaction_site:
name: testsite1
state: present
host: myhost.webfaction.com
subdomains:
- 'testsite1.my_domain.org'
site_apps:
- ['testapp1', '/']
https: false
login_name: "{{webfaction_user}}"
login_password: "{{webfaction_passwd}}"
'''
import socket
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves import xmlrpc_client
webfaction = xmlrpc_client.ServerProxy('https://api.webfaction.com/')
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(choices=['present', 'absent'], default='present'),
# You can specify an IP address or hostname.
host=dict(required=True),
https=dict(required=False, type='bool', default=False),
subdomains=dict(type='list', elements='str', default=[]),
site_apps=dict(type='list', elements='list', default=[]),
login_name=dict(required=True),
login_password=dict(required=True, no_log=True),
),
supports_check_mode=True
)
site_name = module.params['name']
site_state = module.params['state']
site_host = module.params['host']
site_ip = socket.gethostbyname(site_host)
session_id, account = webfaction.login(
module.params['login_name'],
module.params['login_password']
)
site_list = webfaction.list_websites(session_id)
site_map = dict([(i['name'], i) for i in site_list])
existing_site = site_map.get(site_name)
result = {}
# Here's where the real stuff happens
if site_state == 'present':
# Does a site with this name already exist?
if existing_site:
# If yes, but it's on a different IP address, then fail.
# If we wanted to allow relocation, we could add a 'relocate=true' option
# which would get the existing IP address, delete the site there, and create it
# at the new address. A bit dangerous, perhaps, so for now we'll require manual
# deletion if it's on another host.
if existing_site['ip'] != site_ip:
module.fail_json(msg="Website already exists with a different IP address. Please fix by hand.")
# If it's on this host and the key parameters are the same, nothing needs to be done.
if (existing_site['https'] == module.boolean(module.params['https'])) and \
(set(existing_site['subdomains']) == set(module.params['subdomains'])) and \
(dict(existing_site['website_apps']) == dict(module.params['site_apps'])):
module.exit_json(
changed=False
)
positional_args = [
session_id, site_name, site_ip,
module.boolean(module.params['https']),
module.params['subdomains'],
]
for a in module.params['site_apps']:
positional_args.append((a[0], a[1]))
if not module.check_mode:
# If this isn't a dry run, create or modify the site
result.update(
webfaction.create_website(
*positional_args
) if not existing_site else webfaction.update_website(
*positional_args
)
)
elif site_state == 'absent':
# If the site's already not there, nothing changed.
if not existing_site:
module.exit_json(
changed=False,
)
if not module.check_mode:
# If this isn't a dry run, delete the site
result.update(
webfaction.delete_website(session_id, site_name, site_ip)
)
else:
module.fail_json(msg="Unknown state specified: {0}".format(site_state))
module.exit_json(
changed=True,
result=result
)
if __name__ == '__main__':
main()


@@ -30,10 +30,10 @@ EXAMPLES = ""
RETURN = ""

+from ansible_collections.community.general.plugins.module_utils import deps
from ansible_collections.community.general.plugins.module_utils.module_helper import ModuleHelper
-from ansible.module_utils.basic import missing_required_lib

-with ModuleHelper.dependency("nopackagewiththisname", missing_required_lib("nopackagewiththisname")):
+with deps.declare("nopackagewiththisname"):
    import nopackagewiththisname  # noqa: F401, pylint: disable=unused-import

@@ -50,6 +50,7 @@ class MSimple(ModuleHelper):
    def __init_module__(self):
        self.vars.set('value', None)
        self.vars.set('abc', "abc", diff=True)
+        deps.validate(self.module)

    def __run__(self):
        if (0 if self.vars.a is None else self.vars.a) >= 100:
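
The test change above mirrors the removal of mh.mixins.deps.DependencyMixin in favour of the deps module_utils helper: a dependency is declared where it is imported and validated when the module initializes. A minimal sketch of that pattern for a ModuleHelper-based module follows; the class name, option and third-party library are hypothetical, only deps.declare(), deps.validate() and ModuleHelper are taken from the diff:

from ansible_collections.community.general.plugins.module_utils import deps
from ansible_collections.community.general.plugins.module_utils.module_helper import ModuleHelper

# Record the import failure (if any) instead of failing at import time.
with deps.declare("somelibrary"):
    import somelibrary  # noqa: F401, pylint: disable=unused-import


class MyModule(ModuleHelper):
    module = dict(
        argument_spec=dict(a=dict(type='int')),
        supports_check_mode=True,
    )

    def __init_module__(self):
        # Calls fail_json() now if "somelibrary" could not be imported.
        deps.validate(self.module)
        self.vars.set('value', None)

    def __run__(self):
        # Hypothetical use of the declared dependency.
        self.vars.set('value', somelibrary.__name__)


def main():
    MyModule.execute()


if __name__ == '__main__':
    main()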


@@ -6,9 +6,6 @@ plugins/modules/iptables_state.py validate-modules:undocumented-parameter
plugins/modules/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/osx_defaults.py validate-modules:parameter-state-invalid-choice
plugins/modules/parted.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rax_files_objects.py use-argspec-type-path # module deprecated - removed in 9.0.0
-plugins/modules/rax_files.py validate-modules:parameter-state-invalid-choice # module deprecated - removed in 9.0.0
-plugins/modules/rax.py use-argspec-type-path # module deprecated - removed in 9.0.0
plugins/modules/read_csv.py validate-modules:invalid-documentation
plugins/modules/rhevm.py validate-modules:parameter-state-invalid-choice
plugins/modules/xfconf.py validate-modules:return-syntax-error


@@ -7,9 +7,6 @@ plugins/modules/iptables_state.py validate-modules:undocumented-parameter
plugins/modules/lxc_container.py validate-modules:use-run-command-not-popen
plugins/modules/osx_defaults.py validate-modules:parameter-state-invalid-choice
plugins/modules/parted.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rax_files_objects.py use-argspec-type-path # module deprecated - removed in 9.0.0
-plugins/modules/rax_files.py validate-modules:parameter-state-invalid-choice # module deprecated - removed in 9.0.0
-plugins/modules/rax.py use-argspec-type-path # module deprecated - removed in 9.0.0
plugins/modules/read_csv.py validate-modules:invalid-documentation
plugins/modules/rhevm.py validate-modules:parameter-state-invalid-choice
plugins/modules/udm_user.py import-3.11 # Uses deprecated stdlib library 'crypt'

View file

@@ -5,9 +5,6 @@ plugins/modules/iptables_state.py validate-modules:undocumented-parameter
 plugins/modules/lxc_container.py validate-modules:use-run-command-not-popen
 plugins/modules/osx_defaults.py validate-modules:parameter-state-invalid-choice
 plugins/modules/parted.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rax_files_objects.py use-argspec-type-path # module deprecated - removed in 9.0.0
-plugins/modules/rax_files.py validate-modules:parameter-state-invalid-choice # module deprecated - removed in 9.0.0
-plugins/modules/rax.py use-argspec-type-path # module deprecated - removed in 9.0.0
 plugins/modules/rhevm.py validate-modules:parameter-state-invalid-choice
 plugins/modules/udm_user.py import-3.11 # Uses deprecated stdlib library 'crypt'
 plugins/modules/xfconf.py validate-modules:return-syntax-error

View file

@@ -5,9 +5,6 @@ plugins/modules/iptables_state.py validate-modules:undocumented-parameter
 plugins/modules/lxc_container.py validate-modules:use-run-command-not-popen
 plugins/modules/osx_defaults.py validate-modules:parameter-state-invalid-choice
 plugins/modules/parted.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rax_files_objects.py use-argspec-type-path # module deprecated - removed in 9.0.0
-plugins/modules/rax_files.py validate-modules:parameter-state-invalid-choice # module deprecated - removed in 9.0.0
-plugins/modules/rax.py use-argspec-type-path # module deprecated - removed in 9.0.0
 plugins/modules/rhevm.py validate-modules:parameter-state-invalid-choice
 plugins/modules/udm_user.py import-3.11 # Uses deprecated stdlib library 'crypt'
 plugins/modules/udm_user.py import-3.12 # Uses deprecated stdlib library 'crypt'

View file

@@ -5,9 +5,6 @@ plugins/modules/iptables_state.py validate-modules:undocumented-parameter
 plugins/modules/lxc_container.py validate-modules:use-run-command-not-popen
 plugins/modules/osx_defaults.py validate-modules:parameter-state-invalid-choice
 plugins/modules/parted.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rax_files_objects.py use-argspec-type-path # module deprecated - removed in 9.0.0
-plugins/modules/rax_files.py validate-modules:parameter-state-invalid-choice # module deprecated - removed in 9.0.0
-plugins/modules/rax.py use-argspec-type-path # module deprecated - removed in 9.0.0
 plugins/modules/rhevm.py validate-modules:parameter-state-invalid-choice
 plugins/modules/udm_user.py import-3.11 # Uses deprecated stdlib library 'crypt'
 plugins/modules/udm_user.py import-3.12 # Uses deprecated stdlib library 'crypt'

View file

@@ -5,9 +5,6 @@ plugins/modules/iptables_state.py validate-modules:undocumented-parameter
 plugins/modules/lxc_container.py validate-modules:use-run-command-not-popen
 plugins/modules/osx_defaults.py validate-modules:parameter-state-invalid-choice
 plugins/modules/parted.py validate-modules:parameter-state-invalid-choice
-plugins/modules/rax_files_objects.py use-argspec-type-path # module deprecated - removed in 9.0.0
-plugins/modules/rax_files.py validate-modules:parameter-state-invalid-choice # module deprecated - removed in 9.0.0
-plugins/modules/rax.py use-argspec-type-path # module deprecated - removed in 9.0.0
 plugins/modules/rhevm.py validate-modules:parameter-state-invalid-choice
 plugins/modules/udm_user.py import-3.11 # Uses deprecated stdlib library 'crypt'
 plugins/modules/udm_user.py import-3.12 # Uses deprecated stdlib library 'crypt'

View file

@@ -7,6 +7,7 @@
 - id: install_dancer_compatibility
   input:
     name: Dancer
+    mode: compatibility
   output:
     changed: true
   run_command_calls:
@@ -23,6 +24,7 @@
 - id: install_dancer_already_installed_compatibility
   input:
     name: Dancer
+    mode: compatibility
   output:
     changed: false
   run_command_calls:
@@ -34,7 +36,6 @@
 - id: install_dancer
   input:
     name: Dancer
-    mode: new
   output:
     changed: true
   run_command_calls:
@@ -46,6 +47,7 @@
 - id: install_distribution_file_compatibility
   input:
     name: MIYAGAWA/Plack-0.99_05.tar.gz
+    mode: compatibility
   output:
     changed: true
   run_command_calls:
@@ -57,7 +59,6 @@
 - id: install_distribution_file
   input:
     name: MIYAGAWA/Plack-0.99_05.tar.gz
-    mode: new
   output:
     changed: true
   run_command_calls:
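These fixture changes track the new default of the cpanm mode option: cases that previously relied on the implicit compatibility mode now set it explicitly, while the explicit mode: new is dropped because it matches the default. A playbook that needs the old behaviour has to do the same; the tasks below are a hedged illustration, not taken from the test suite, and the Dancer package name is only an example.

# Hedged example tasks; not part of the commit or the test fixtures.
- name: Keep the pre-9.0.0 behaviour by requesting compatibility mode explicitly
  community.general.cpanm:
    name: Dancer
    mode: compatibility

- name: Rely on the default mode, which the fixtures above treat as the former mode=new
  community.general.cpanm:
    name: Dancer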