
Remove kubevirt and set up redirects to community.kubevirt (#1317)

* Remove kubevirt and set up redirects to community.kubevirt

This also removes the dependency on community.kubernetes which fixes
https://github.com/ansible-collections/community.general/issues/354.

* Update changelogs/fragments/1317-kubevirt-migration-removal.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/1317-kubevirt-migration-removal.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add missed redirects

Co-authored-by: Felix Fontein <felix@fontein.de>
David Moreau Simard 2021-01-05 15:35:22 -05:00 committed by GitHub
parent ddaad1e650
commit e53f153e30
34 changed files with 35 additions and 3591 deletions

.github/BOTMETA.yml

@@ -116,10 +116,6 @@ files:
$module_utils/ipa.py:
maintainers: $team_ipa
labels: ipa
$module_utils/kubevirt.py:
maintainers: $team_kubevirt
labels: cloud kubevirt
keywords: kubevirt
$module_utils/manageiq.py:
maintainers: $team_manageiq
labels: manageiq
@@ -171,9 +167,6 @@ files:
$modules/cloud/huawei/:
maintainers: $team_huawei huaweicloud
keywords: cloud huawei hwc
$modules/cloud/kubevirt/:
maintainers: $team_kubevirt kubevirt
keywords: kubevirt
$modules/cloud/linode/:
maintainers: $team_linode
$modules/cloud/linode/linode.py:
@@ -1008,7 +1001,6 @@ macros:
team_ipa: Akasurde Nosmoht fxfitz
team_jboss: Wolfant jairojunior wbrefvem
team_keycloak: eikef ndclt
team_kubevirt: machacekondra mmazur pkliczewski
team_linode: InTheCloudDan decentral1se displague rmcintosh
team_macos: Akasurde kyleabenson martinm82 danieljaouen indrajitr
team_manageiq: abellotti cben gtanzillo yaacov zgalor dkorn evertmulder

changelogs/fragments/1317-kubevirt-migration-removal.yml

@@ -0,0 +1,13 @@
removed_features:
- |
All Kubevirt modules and plugins have now been migrated from community.general to the `community.kubevirt <https://galaxy.ansible.com/community/kubevirt>`_ Ansible collection.
If you use ansible-base 2.10 or newer, redirections have been provided.
If you use Ansible 2.9 and installed this collection, you need to adjust the FQCNs (``community.general.kubevirt_vm`` → ``community.kubevirt.kubevirt_vm``) and make sure to install the community.kubevirt collection.
breaking_changes:
- |
If you use Ansible 2.9 and the Kubevirt plugins or modules from this collection, community.general 2.0.0 results in errors when trying to use the Kubevirt content by FQCN, like ``community.general.kubevirt_vm``.
Since Ansible 2.9 is not able to use redirections, you will have to adjust your playbooks and roles manually to use the new FQCNs (``community.kubevirt.kubevirt_vm`` for the previous example) and to make sure that you have ``community.kubevirt`` installed.
If you use ansible-base 2.10 or newer and did not install Ansible 3.0.0, but installed (and/or upgraded) community.general manually, you need to make sure to also install the ``community.kubevirt`` collection if you are using any of the Kubevirt plugins or modules.
While ansible-base 2.10 or newer can use the redirects that community.general 2.0.0 adds, the collection they point to (such as community.kubevirt) must be installed for them to work.
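As an illustration of the FQCN adjustment described in this fragment, here is a minimal sketch of the same task before and after the migration; the VM name and namespace are placeholders, and on Ansible 2.9 the community.kubevirt collection must be installed manually.

# Before: community.general < 2.0.0 (placeholder VM name and namespace)
- name: Ensure the virtual machine exists
  community.general.kubevirt_vm:
    state: present
    name: testvm
    namespace: vms

# After: community.general >= 2.0.0, with the community.kubevirt collection installed
- name: Ensure the virtual machine exists
  community.kubevirt.kubevirt_vm:
    state: present
    name: testvm
    namespace: vms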

galaxy.yml

@@ -7,9 +7,8 @@ authors:
description: null
license_file: COPYING
tags: [community]
# NOTE: No more dependencies can be added to this list
dependencies:
community.kubernetes: '>=1.0.0'
# NOTE: No dependencies are expected to be added here
# dependencies:
repository: https://github.com/ansible-collections/community.general
documentation: https://docs.ansible.com/ansible/latest/collections/community/general/
homepage: https://github.com/ansible-collections/community.general
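Because the collection no longer pulls in community.kubernetes as a dependency, users of the former kubevirt content now have to declare the new collection themselves. A minimal, illustrative requirements.yml might look like the sketch below; the version constraint is only an example, not something this change requires.

# requirements.yml (illustrative)
collections:
  - name: community.general
    version: '>=2.0.0'        # example constraint
  - name: community.kubevirt  # provides the migrated kubevirt content

It can then be installed with ansible-galaxy collection install -r requirements.yml.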

meta/runtime.yml

@@ -1,13 +1,6 @@
---
requires_ansible: '>=2.9.10'
action_groups:
k8s:
- kubevirt_cdi_upload
- kubevirt_preset
- kubevirt_pvc
- kubevirt_rs
- kubevirt_template
- kubevirt_vm
ovirt:
- ovirt_affinity_label_facts
- ovirt_api_facts
@@ -218,6 +211,18 @@ plugin_routing:
tombstone:
removal_version: 2.0.0
warning_text: Use the modules from the theforeman.foreman collection instead.
kubevirt_cdi_upload:
redirect: community.kubevirt.kubevirt_cdi_upload
kubevirt_preset:
redirect: community.kubevirt.kubevirt_preset
kubevirt_pvc:
redirect: community.kubevirt.kubevirt_pvc
kubevirt_rs:
redirect: community.kubevirt.kubevirt_rs
kubevirt_template:
redirect: community.kubevirt.kubevirt_template
kubevirt_vm:
redirect: community.kubevirt.kubevirt_vm
ldap_attr:
deprecation:
removal_version: 3.0.0
@@ -553,6 +558,10 @@ plugin_routing:
redirect: community.docker.docker
hetzner:
redirect: community.hrobot.robot
kubevirt_common_options:
redirect: community.kubevirt.kubevirt_common_options
kubevirt_vm_options:
redirect: community.kubevirt.kubevirt_vm_options
postgresql:
redirect: community.postgresql.postgresql
module_utils:
@@ -568,6 +577,8 @@ plugin_routing:
redirect: community.google.gcp
hetzner:
redirect: community.hrobot.robot
kubevirt:
redirect: community.kubevirt.kubevirt
postgresql:
redirect: community.postgresql.postgresql
callback:
@@ -588,3 +599,5 @@ plugin_routing:
redirect: community.docker.docker_machine
docker_swarm:
redirect: community.docker.docker_swarm
kubevirt:
redirect: community.kubevirt.kubevirt
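To show what these routing entries mean in practice, here is a sketch of an inventory source file for the migrated plugin. On ansible-base 2.10 or newer the old community.general.kubevirt name resolves to the same plugin via the redirect above, provided community.kubevirt is installed; the snippet assumes the migrated plugin accepts its new FQCN as the value of plugin, and the namespace is a placeholder.

# kubevirt.yml (illustrative inventory source)
plugin: community.kubevirt.kubevirt
connections:
  - namespaces:
      - vms   # placeholder namespace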

plugins/doc_fragments/kubevirt_common_options.py

@@ -1,133 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, KubeVirt Team <@kubevirt>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
DOCUMENTATION = r'''
options:
resource_definition:
description:
- "A partial YAML definition of the object being created/updated. Here you can define Kubernetes
resource parameters not covered by this module's parameters."
- "NOTE: I(resource_definition) has lower priority than module parameters. If you try to define e.g.
I(metadata.namespace) here, that value will be ignored and I(namespace) used instead."
aliases:
- definition
- inline
type: dict
wait:
description:
- "I(True) if the module should wait for the resource to get into desired state."
type: bool
default: yes
force:
description:
- If set to C(yes), and I(state) is C(present), an existing object will be replaced.
type: bool
default: no
wait_timeout:
description:
- The amount of time in seconds the module should wait for the resource to get into desired state.
type: int
default: 120
wait_sleep:
description:
- Number of seconds to sleep between checks.
default: 5
version_added: '0.2.0'
memory:
description:
- The amount of memory to be requested by virtual machine.
- For example 1024Mi.
type: str
memory_limit:
description:
- The maximum memory to be used by virtual machine.
- For example 1024Mi.
type: str
machine_type:
description:
- QEMU machine type is the actual chipset of the virtual machine.
type: str
merge_type:
description:
- Whether to override the default patch merge approach with a specific type.
- If more than one merge type is given, the merge types will be tried in order.
- "Defaults to C(['strategic-merge', 'merge']), which is ideal for using the same parameters
on resource kinds that combine Custom Resources and built-in resources, as
Custom Resource Definitions typically aren't updatable by the usual strategic merge."
- "See U(https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/#use-a-json-merge-patch-to-update-a-deployment)"
type: list
choices: [ json, merge, strategic-merge ]
cpu_shares:
description:
- "Specify CPU shares."
type: int
cpu_limit:
description:
- "Is converted to its millicore value and multiplied by 100. The resulting value is the total amount of CPU time that a container can use
every 100ms. A virtual machine cannot use more than its share of CPU time during this interval."
type: int
cpu_cores:
description:
- "Number of CPU cores."
type: int
cpu_model:
description:
- "CPU model."
- "You can check list of available models here: U(https://github.com/libvirt/libvirt/blob/master/src/cpu_map/index.xml)."
- "I(Note:) User can define default CPU model via as I(default-cpu-model) in I(kubevirt-config) I(ConfigMap), if not set I(host-model) is used."
- "I(Note:) Be sure that node CPU model where you run a VM, has the same or higher CPU family."
- "I(Note:) If CPU model wasn't defined, the VM will have CPU model closest to one that used on the node where the VM is running."
type: str
bootloader:
description:
- "Specify the bootloader of the virtual machine."
- "All virtual machines use BIOS by default for booting."
type: str
smbios_uuid:
description:
- "In order to provide a consistent view on the virtualized hardware for the guest OS, the SMBIOS UUID can be set."
type: str
cpu_features:
description:
- "List of dictionary to fine-tune features provided by the selected CPU model."
- "I(Note): Policy attribute can either be omitted or contain one of the following policies: force, require, optional, disable, forbid."
- "I(Note): In case a policy is omitted for a feature, it will default to require."
- "More information about policies: U(https://libvirt.org/formatdomain.html#elementsCPU)"
type: list
headless:
description:
- "Specify if the virtual machine should have attached a minimal Video and Graphics device configuration."
- "By default a minimal Video and Graphics device configuration will be applied to the VirtualMachineInstance. The video device is vga
compatible and comes with a memory size of 16 MB."
hugepage_size:
description:
- "Specify huge page size."
type: str
tablets:
description:
- "Specify tablets to be used as input devices"
type: list
hostname:
description:
- "Specifies the hostname of the virtual machine. The hostname will be set either by dhcp, cloud-init if configured or virtual machine
name will be used."
subdomain:
description:
- "If specified, the fully qualified virtual machine hostname will be hostname.subdomain.namespace.svc.cluster_domain. If not specified,
the virtual machine will not have a domain name at all. The DNS entry will resolve to the virtual machine, no matter if the virtual machine
itself can pick up a hostname."
requirements:
- python >= 2.7
- openshift >= 0.8.2
notes:
- "In order to use this module you have to install Openshift Python SDK.
To ensure it's installed with correct version you can create the following task:
I(pip: name=openshift>=0.8.2)"
'''

plugins/doc_fragments/kubevirt_vm_options.py

@@ -1,103 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, KubeVirt Team <@kubevirt>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard oVirt documentation fragment
DOCUMENTATION = r'''
options:
disks:
description:
- List of dictionaries which specify disks of the virtual machine.
- "A disk can be made accessible via four different types: I(disk), I(lun), I(cdrom), I(floppy)."
- "All possible configuration options are available in U(https://kubevirt.io/api-reference/master/definitions.html#_v1_disk)"
- Each disk must specify a I(volume) that declares the volume type of the disk.
All possible configuration options of volume are available in U(https://kubevirt.io/api-reference/master/definitions.html#_v1_volume).
type: list
labels:
description:
- Labels are key/value pairs that are attached to virtual machines. Labels are intended to be used to
specify identifying attributes of virtual machines that are meaningful and relevant to users, but do not directly
imply semantics to the core system. Labels can be used to organize and to select subsets of virtual machines.
Labels can be attached to virtual machines at creation time and subsequently added and modified at any time.
- More on labels that are used for internal implementation U(https://kubevirt.io/user-guide/#/misc/annotations_and_labels)
type: dict
interfaces:
description:
- An interface defines a virtual network interface of a virtual machine (also called a frontend).
- All possible configuration options interfaces are available in U(https://kubevirt.io/api-reference/master/definitions.html#_v1_interface)
- Each interface must specify a I(network) that declares which logical or physical device it is connected to (also called a backend).
All possible configuration options of network are available in U(https://kubevirt.io/api-reference/master/definitions.html#_v1_network).
type: list
cloud_init_nocloud:
description:
- "Represents a cloud-init NoCloud user-data source. The NoCloud data will be added
as a disk to the virtual machine. A proper cloud-init installation is required inside the guest.
More information U(https://kubevirt.io/api-reference/master/definitions.html#_v1_cloudinitnocloudsource)"
type: dict
affinity:
description:
- "Describes node affinity scheduling rules for the vm."
type: dict
suboptions:
soft:
description:
- "The scheduler will prefer to schedule vms to nodes that satisfy the affinity expressions specified by this field, but it may choose a
node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for
each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute
a sum by iterating through the elements of this field and adding C(weight) to the sum if the node has vms which matches the corresponding
C(term); the nodes with the highest sum are the most preferred."
type: dict
hard:
description:
- "If the affinity requirements specified by this field are not met at scheduling time, the vm will not be scheduled onto the node. If
the affinity requirements specified by this field cease to be met at some point during vm execution (e.g. due to a vm label update), the
system may or may not try to eventually evict the vm from its node. When there are multiple elements, the lists of nodes corresponding to
each C(term) are intersected, i.e. all terms must be satisfied."
type: dict
node_affinity:
description:
- "Describes vm affinity scheduling rules e.g. co-locate this vm in the same node, zone, etc. as some other vms"
type: dict
suboptions:
soft:
description:
- "The scheduler will prefer to schedule vms to nodes that satisfy the affinity expressions specified by this field, but it may choose
a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.),
compute a sum by iterating through the elements of this field and adding C(weight) to the sum if the node matches the corresponding
match_expressions; the nodes with the highest sum are the most preferred."
type: dict
hard:
description:
- "If the affinity requirements specified by this field are not met at scheduling time, the vm will not be scheduled onto the node. If
the affinity requirements specified by this field cease to be met at some point during vm execution (e.g. due to an update), the system
may or may not try to eventually evict the vm from its node."
type: dict
anti_affinity:
description:
- "Describes vm anti-affinity scheduling rules e.g. avoid putting this vm in the same node, zone, etc. as some other vms."
type: dict
suboptions:
soft:
description:
- "The scheduler will prefer to schedule vms to nodes that satisfy the anti-affinity expressions specified by this field, but it may
choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights,
i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions,
etc.), compute a sum by iterating through the elements of this field and adding C(weight) to the sum if the node has vms which matches
the corresponding C(term); the nodes with the highest sum are the most preferred."
type: dict
hard:
description:
- "If the anti-affinity requirements specified by this field are not met at scheduling time, the vm will not be scheduled onto the node.
If the anti-affinity requirements specified by this field cease to be met at some point during vm execution (e.g. due to a vm label
update), the system may or may not try to eventually evict the vm from its node. When there are multiple elements, the lists of nodes
corresponding to each C(term) are intersected, i.e. all terms must be satisfied."
type: dict
'''
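Both fragments above document options shared by the kubevirt modules. As a rough, hypothetical illustration of how a few of them (memory, cpu_cores, labels) fit together in a task against the migrated module, consider the sketch below; names and values are placeholders.

- name: Create a small virtual machine
  community.kubevirt.kubevirt_vm:
    state: present
    name: myvm        # placeholder name
    namespace: vms    # placeholder namespace
    memory: 64Mi
    cpu_cores: 1
    labels:
      app: myvm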

plugins/inventory/kubevirt.py

@@ -1,256 +0,0 @@
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: kubevirt
plugin_type: inventory
author:
- KubeVirt Team (@kubevirt)
short_description: KubeVirt inventory source
extends_documentation_fragment:
- inventory_cache
- constructed
description:
- Fetch running VirtualMachines for one or more namespaces.
- Groups by namespace, namespace_vms and labels.
- Uses kubevirt.(yml|yaml) YAML configuration file to set parameter values.
options:
plugin:
description: token that ensures this is a source file for the 'kubevirt' plugin.
required: True
choices: ['kubevirt', 'community.general.kubevirt']
type: str
host_format:
description:
- Specify the format of the host in the inventory group.
default: "{namespace}-{name}-{uid}"
connections:
type: list
description:
- Optional list of cluster connection settings. If no connections are provided, the default
I(~/.kube/config) and active context will be used, and objects will be returned for all namespaces
the active user is authorized to access.
suboptions:
name:
description:
- Optional name to assign to the cluster. If not provided, a name is constructed from the server
and port.
type: str
kubeconfig:
description:
- Path to an existing Kubernetes config file. If not provided, and no other connection
options are provided, the OpenShift client will attempt to load the default
configuration file from I(~/.kube/config.json). Can also be specified via K8S_AUTH_KUBECONFIG
environment variable.
type: str
context:
description:
- The name of a context found in the config file. Can also be specified via K8S_AUTH_CONTEXT environment
variable.
type: str
host:
description:
- Provide a URL for accessing the API. Can also be specified via K8S_AUTH_HOST environment variable.
type: str
api_key:
description:
- Token used to authenticate with the API. Can also be specified via K8S_AUTH_API_KEY environment
variable.
type: str
username:
description:
- Provide a username for authenticating with the API. Can also be specified via K8S_AUTH_USERNAME
environment variable.
type: str
password:
description:
- Provide a password for authenticating with the API. Can also be specified via K8S_AUTH_PASSWORD
environment variable.
type: str
cert_file:
description:
- Path to a certificate used to authenticate with the API. Can also be specified via K8S_AUTH_CERT_FILE
environment variable.
type: str
key_file:
description:
- Path to a key file used to authenticate with the API. Can also be specified via K8S_AUTH_KEY_FILE
environment variable.
type: str
ssl_ca_cert:
description:
- Path to a CA certificate used to authenticate with the API. Can also be specified via
K8S_AUTH_SSL_CA_CERT environment variable.
type: str
verify_ssl:
description:
- "Whether or not to verify the API server's SSL certificates. Can also be specified via
K8S_AUTH_VERIFY_SSL environment variable."
type: bool
namespaces:
description:
- List of namespaces. If not specified, will fetch all virtual machines for all namespaces the user is authorized
to access.
type: list
network_name:
description:
- In case of multiple networks attached to the virtual machine, defines which interface should be returned as the primary IP
address.
type: str
aliases: [ interface_name ]
api_version:
description:
- "Specify the KubeVirt API version."
type: str
annotation_variable:
description:
- "Specify the name of the annotation which provides data, which should be used as inventory host variables."
- "Note, that the value in ansible annotations should be json."
type: str
default: 'ansible'
requirements:
- "openshift >= 0.6"
- "PyYAML >= 3.11"
'''
EXAMPLES = '''
# File must be named kubevirt.yaml or kubevirt.yml
# Authenticate with token, and return all virtual machines for all namespaces
plugin: community.general.kubevirt
connections:
- host: https://kubevirt.io
api_key: xxxxxxxxxxxxxxxx
verify_ssl: false
# Use default config (~/.kube/config) file and active context, and return vms with interfaces
# connected to network myovsnetwork and from namespace vms
plugin: community.general.kubevirt
connections:
- namespaces:
- vms
network_name: myovsnetwork
'''
import json
from ansible_collections.community.kubernetes.plugins.inventory.k8s import K8sInventoryException, InventoryModule as K8sInventoryModule, format_dynamic_api_exc
try:
from openshift.dynamic.exceptions import DynamicApiError
except ImportError:
pass
API_VERSION = 'kubevirt.io/v1alpha3'
class InventoryModule(K8sInventoryModule):
NAME = 'community.general.kubevirt'
def setup(self, config_data, cache, cache_key):
self.config_data = config_data
super(InventoryModule, self).setup(config_data, cache, cache_key)
def fetch_objects(self, connections):
client = self.get_api_client()
vm_format = self.config_data.get('host_format', '{namespace}-{name}-{uid}')
if connections:
for connection in connections:
client = self.get_api_client(**connection)
name = connection.get('name', self.get_default_host_name(client.configuration.host))
if connection.get('namespaces'):
namespaces = connection['namespaces']
else:
namespaces = self.get_available_namespaces(client)
interface_name = connection.get('network_name')
api_version = connection.get('api_version', API_VERSION)
annotation_variable = connection.get('annotation_variable', 'ansible')
for namespace in namespaces:
self.get_vms_for_namespace(client, name, namespace, vm_format, interface_name, api_version, annotation_variable)
else:
name = self.get_default_host_name(client.configuration.host)
namespaces = self.get_available_namespaces(client)
for namespace in namespaces:
self.get_vms_for_namespace(client, name, namespace, vm_format, None, api_version, annotation_variable)
def get_vms_for_namespace(self, client, name, namespace, name_format, interface_name=None, api_version=None, annotation_variable=None):
v1_vm = client.resources.get(api_version=api_version, kind='VirtualMachineInstance')
try:
obj = v1_vm.get(namespace=namespace)
except DynamicApiError as exc:
self.display.debug(exc)
raise K8sInventoryException('Error fetching Virtual Machines list: %s' % format_dynamic_api_exc(exc))
namespace_group = 'namespace_{0}'.format(namespace)
namespace_vms_group = '{0}_vms'.format(namespace_group)
name = self._sanitize_group_name(name)
namespace_group = self._sanitize_group_name(namespace_group)
namespace_vms_group = self._sanitize_group_name(namespace_vms_group)
self.inventory.add_group(name)
self.inventory.add_group(namespace_group)
self.inventory.add_child(name, namespace_group)
self.inventory.add_group(namespace_vms_group)
self.inventory.add_child(namespace_group, namespace_vms_group)
for vm in obj.items:
if not (vm.status and vm.status.interfaces):
continue
# Find interface by its name:
if interface_name is None:
interface = vm.status.interfaces[0]
else:
interface = next(
(i for i in vm.status.interfaces if i.name == interface_name),
None
)
# If interface is not found or IP address is not reported skip this VM:
if interface is None or interface.ipAddress is None:
continue
vm_name = name_format.format(namespace=vm.metadata.namespace, name=vm.metadata.name, uid=vm.metadata.uid)
vm_ip = interface.ipAddress
vm_annotations = {} if not vm.metadata.annotations else dict(vm.metadata.annotations)
self.inventory.add_host(vm_name)
if vm.metadata.labels:
# create a group for each label_value
for key, value in vm.metadata.labels:
group_name = 'label_{0}_{1}'.format(key, value)
group_name = self._sanitize_group_name(group_name)
self.inventory.add_group(group_name)
self.inventory.add_child(group_name, vm_name)
vm_labels = dict(vm.metadata.labels)
else:
vm_labels = {}
self.inventory.add_child(namespace_vms_group, vm_name)
# add hostvars
self.inventory.set_variable(vm_name, 'ansible_host', vm_ip)
self.inventory.set_variable(vm_name, 'labels', vm_labels)
self.inventory.set_variable(vm_name, 'annotations', vm_annotations)
self.inventory.set_variable(vm_name, 'object_type', 'vm')
self.inventory.set_variable(vm_name, 'resource_version', vm.metadata.resourceVersion)
self.inventory.set_variable(vm_name, 'uid', vm.metadata.uid)
# Add all variables which are listed in 'ansible' annotation:
annotations_data = json.loads(vm_annotations.get(annotation_variable, "{}"))
for k, v in annotations_data.items():
self.inventory.set_variable(vm_name, k, v)
def verify_file(self, path):
if super(InventoryModule, self).verify_file(path):
if path.endswith(('kubevirt.yml', 'kubevirt.yaml')):
return True
return False

plugins/module_utils/kubevirt.py

@@ -1,465 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2018, KubeVirt Team <@kubevirt>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from collections import defaultdict
from distutils.version import Version
from ansible.module_utils.common import dict_transformations
from ansible.module_utils.common._collections_compat import Sequence
from ansible_collections.community.kubernetes.plugins.module_utils.common import list_dict_str
from ansible_collections.community.kubernetes.plugins.module_utils.raw import KubernetesRawModule
import copy
import re
MAX_SUPPORTED_API_VERSION = 'v1alpha3'
API_GROUP = 'kubevirt.io'
# Put all args that (can) modify 'spec:' here:
VM_SPEC_DEF_ARG_SPEC = {
'resource_definition': {
'type': 'dict',
'aliases': ['definition', 'inline']
},
'memory': {'type': 'str'},
'memory_limit': {'type': 'str'},
'cpu_cores': {'type': 'int'},
'disks': {'type': 'list'},
'labels': {'type': 'dict'},
'interfaces': {'type': 'list'},
'machine_type': {'type': 'str'},
'cloud_init_nocloud': {'type': 'dict'},
'bootloader': {'type': 'str'},
'smbios_uuid': {'type': 'str'},
'cpu_model': {'type': 'str'},
'headless': {'type': 'str'},
'hugepage_size': {'type': 'str'},
'tablets': {'type': 'list'},
'cpu_limit': {'type': 'int'},
'cpu_shares': {'type': 'int'},
'cpu_features': {'type': 'list'},
'affinity': {'type': 'dict'},
'anti_affinity': {'type': 'dict'},
'node_affinity': {'type': 'dict'},
}
# And other common args go here:
VM_COMMON_ARG_SPEC = {
'name': {'required': True},
'namespace': {'required': True},
'hostname': {'type': 'str'},
'subdomain': {'type': 'str'},
'state': {
'default': 'present',
'choices': ['present', 'absent'],
},
'force': {
'type': 'bool',
'default': False,
},
'merge_type': {'type': 'list', 'choices': ['json', 'merge', 'strategic-merge']},
'wait': {'type': 'bool', 'default': True},
'wait_timeout': {'type': 'int', 'default': 120},
'wait_sleep': {'type': 'int', 'default': 5},
}
VM_COMMON_ARG_SPEC.update(VM_SPEC_DEF_ARG_SPEC)
def virtdict():
"""
This function creates a dictionary whose missing keys default to (nested) dictionaries.
"""
return defaultdict(virtdict)
class KubeAPIVersion(Version):
component_re = re.compile(r'(\d+ | [a-z]+)', re.VERBOSE)
def __init__(self, vstring=None):
if vstring:
self.parse(vstring)
def parse(self, vstring):
self.vstring = vstring
components = [x for x in self.component_re.split(vstring) if x]
for i, obj in enumerate(components):
try:
components[i] = int(obj)
except ValueError:
pass
errmsg = "version '{0}' does not conform to kubernetes api versioning guidelines".format(vstring)
c = components
if len(c) not in (2, 4) or c[0] != 'v' or not isinstance(c[1], int):
raise ValueError(errmsg)
if len(c) == 4 and (c[2] not in ('alpha', 'beta') or not isinstance(c[3], int)):
raise ValueError(errmsg)
self.version = components
def __str__(self):
return self.vstring
def __repr__(self):
return "KubeAPIVersion ('{0}')".format(str(self))
def _cmp(self, other):
if isinstance(other, str):
other = KubeAPIVersion(other)
myver = self.version
otherver = other.version
for ver in myver, otherver:
if len(ver) == 2:
ver.extend(['zeta', 9999])
if myver == otherver:
return 0
if myver < otherver:
return -1
if myver > otherver:
return 1
# python2 compatibility
def __cmp__(self, other):
return self._cmp(other)
class KubeVirtRawModule(KubernetesRawModule):
def __init__(self, *args, **kwargs):
super(KubeVirtRawModule, self).__init__(*args, **kwargs)
@staticmethod
def merge_dicts(base_dict, merging_dicts):
"""This function merges a base dictionary with one or more other dictionaries.
The base dictionary takes precedence when there is a key collision.
merging_dicts can be a dict or a list or tuple of dicts. In the latter case, the
dictionaries at the front of the list have higher precedence over the ones at the end.
"""
if not merging_dicts:
merging_dicts = ({},)
if not isinstance(merging_dicts, Sequence):
merging_dicts = (merging_dicts,)
new_dict = {}
for d in reversed(merging_dicts):
new_dict = dict_transformations.dict_merge(new_dict, d)
new_dict = dict_transformations.dict_merge(new_dict, base_dict)
return new_dict
def get_resource(self, resource):
try:
existing = resource.get(name=self.name, namespace=self.namespace)
except Exception:
existing = None
return existing
def _define_datavolumes(self, datavolumes, spec):
"""
Takes the datavolumes parameter of Ansible and creates the kubevirt API dataVolumeTemplates
structure from it
"""
if not datavolumes:
return
spec['dataVolumeTemplates'] = []
for dv in datavolumes:
# Add datavolume to datavolumetemplates spec:
dvt = virtdict()
dvt['metadata']['name'] = dv.get('name')
dvt['spec']['pvc'] = {
'accessModes': dv.get('pvc').get('accessModes'),
'resources': {
'requests': {
'storage': dv.get('pvc').get('storage'),
}
}
}
dvt['spec']['source'] = dv.get('source')
spec['dataVolumeTemplates'].append(dvt)
# Add datavolume to disks spec:
if not spec['template']['spec']['domain']['devices']['disks']:
spec['template']['spec']['domain']['devices']['disks'] = []
spec['template']['spec']['domain']['devices']['disks'].append(
{
'name': dv.get('name'),
'disk': dv.get('disk', {'bus': 'virtio'}),
}
)
# Add datavolume to volumes spec:
if not spec['template']['spec']['volumes']:
spec['template']['spec']['volumes'] = []
spec['template']['spec']['volumes'].append(
{
'dataVolume': {
'name': dv.get('name')
},
'name': dv.get('name'),
}
)
def _define_cloud_init(self, cloud_init_nocloud, template_spec):
"""
Takes the user's cloud_init_nocloud parameter and fills it into the kubevirt
API structure. The disk name is hardcoded to ansiblecloudinitdisk.
"""
if cloud_init_nocloud:
if not template_spec['volumes']:
template_spec['volumes'] = []
if not template_spec['domain']['devices']['disks']:
template_spec['domain']['devices']['disks'] = []
template_spec['volumes'].append({'name': 'ansiblecloudinitdisk', 'cloudInitNoCloud': cloud_init_nocloud})
template_spec['domain']['devices']['disks'].append({
'name': 'ansiblecloudinitdisk',
'disk': {'bus': 'virtio'},
})
def _define_interfaces(self, interfaces, template_spec, defaults):
"""
Takes the interfaces parameter of Ansible and creates the kubevirt API interfaces
and networks structure from it.
"""
if not interfaces and defaults and 'interfaces' in defaults:
interfaces = copy.deepcopy(defaults['interfaces'])
for d in interfaces:
d['network'] = defaults['networks'][0]
if interfaces:
# Extract interfaces k8s specification from interfaces list passed to Ansible:
spec_interfaces = []
for i in interfaces:
spec_interfaces.append(
self.merge_dicts(dict((k, v) for k, v in i.items() if k != 'network'), defaults['interfaces'])
)
if 'interfaces' not in template_spec['domain']['devices']:
template_spec['domain']['devices']['interfaces'] = []
template_spec['domain']['devices']['interfaces'].extend(spec_interfaces)
# Extract networks k8s specification from interfaces list passed to Ansible:
spec_networks = []
for i in interfaces:
net = i['network']
net['name'] = i['name']
spec_networks.append(self.merge_dicts(net, defaults['networks']))
if 'networks' not in template_spec:
template_spec['networks'] = []
template_spec['networks'].extend(spec_networks)
def _define_disks(self, disks, template_spec, defaults):
"""
Takes the disks parameter of Ansible and creates the kubevirt API disks and
volumes structure from it.
"""
if not disks and defaults and 'disks' in defaults:
disks = copy.deepcopy(defaults['disks'])
for d in disks:
d['volume'] = defaults['volumes'][0]
if disks:
# Extract k8s specification from disks list passed to Ansible:
spec_disks = []
for d in disks:
spec_disks.append(
self.merge_dicts(dict((k, v) for k, v in d.items() if k != 'volume'), defaults['disks'])
)
if 'disks' not in template_spec['domain']['devices']:
template_spec['domain']['devices']['disks'] = []
template_spec['domain']['devices']['disks'].extend(spec_disks)
# Extract volumes k8s specification from disks list passed to Ansible:
spec_volumes = []
for d in disks:
volume = d['volume']
volume['name'] = d['name']
spec_volumes.append(self.merge_dicts(volume, defaults['volumes']))
if 'volumes' not in template_spec:
template_spec['volumes'] = []
template_spec['volumes'].extend(spec_volumes)
def find_supported_resource(self, kind):
results = self.client.resources.search(kind=kind, group=API_GROUP)
if not results:
self.fail('Failed to find resource {0} in {1}'.format(kind, API_GROUP))
sr = sorted(results, key=lambda r: KubeAPIVersion(r.api_version), reverse=True)
for r in sr:
if KubeAPIVersion(r.api_version) <= KubeAPIVersion(MAX_SUPPORTED_API_VERSION):
return r
self.fail("API versions {0} are too recent. Max supported is {1}/{2}.".format(
str([r.api_version for r in sr]), API_GROUP, MAX_SUPPORTED_API_VERSION))
def _construct_vm_definition(self, kind, definition, template, params, defaults=None):
self.client = self.get_api_client()
disks = params.get('disks', [])
memory = params.get('memory')
memory_limit = params.get('memory_limit')
cpu_cores = params.get('cpu_cores')
cpu_model = params.get('cpu_model')
cpu_features = params.get('cpu_features')
labels = params.get('labels')
datavolumes = params.get('datavolumes')
interfaces = params.get('interfaces')
bootloader = params.get('bootloader')
cloud_init_nocloud = params.get('cloud_init_nocloud')
machine_type = params.get('machine_type')
headless = params.get('headless')
smbios_uuid = params.get('smbios_uuid')
hugepage_size = params.get('hugepage_size')
tablets = params.get('tablets')
cpu_shares = params.get('cpu_shares')
cpu_limit = params.get('cpu_limit')
node_affinity = params.get('node_affinity')
vm_affinity = params.get('affinity')
vm_anti_affinity = params.get('anti_affinity')
hostname = params.get('hostname')
subdomain = params.get('subdomain')
template_spec = template['spec']
# Merge additional flat parameters:
if memory:
template_spec['domain']['resources']['requests']['memory'] = memory
if cpu_shares:
template_spec['domain']['resources']['requests']['cpu'] = cpu_shares
if cpu_limit:
template_spec['domain']['resources']['limits']['cpu'] = cpu_limit
if tablets:
for tablet in tablets:
tablet['type'] = 'tablet'
template_spec['domain']['devices']['inputs'] = tablets
if memory_limit:
template_spec['domain']['resources']['limits']['memory'] = memory_limit
if hugepage_size is not None:
template_spec['domain']['memory']['hugepages']['pageSize'] = hugepage_size
if cpu_features is not None:
template_spec['domain']['cpu']['features'] = cpu_features
if cpu_cores is not None:
template_spec['domain']['cpu']['cores'] = cpu_cores
if cpu_model:
template_spec['domain']['cpu']['model'] = cpu_model
if labels:
template['metadata']['labels'] = self.merge_dicts(labels, template['metadata']['labels'])
if machine_type:
template_spec['domain']['machine']['type'] = machine_type
if bootloader:
template_spec['domain']['firmware']['bootloader'] = {bootloader: {}}
if smbios_uuid:
template_spec['domain']['firmware']['uuid'] = smbios_uuid
if headless is not None:
template_spec['domain']['devices']['autoattachGraphicsDevice'] = not headless
if vm_affinity or vm_anti_affinity:
vms_affinity = vm_affinity or vm_anti_affinity
affinity_name = 'podAffinity' if vm_affinity else 'podAntiAffinity'
for affinity in vms_affinity.get('soft', []):
if not template_spec['affinity'][affinity_name]['preferredDuringSchedulingIgnoredDuringExecution']:
template_spec['affinity'][affinity_name]['preferredDuringSchedulingIgnoredDuringExecution'] = []
template_spec['affinity'][affinity_name]['preferredDuringSchedulingIgnoredDuringExecution'].append({
'weight': affinity.get('weight'),
'podAffinityTerm': {
'labelSelector': {
'matchExpressions': affinity.get('term').get('match_expressions'),
},
'topologyKey': affinity.get('topology_key'),
},
})
for affinity in vms_affinity.get('hard', []):
if not template_spec['affinity'][affinity_name]['requiredDuringSchedulingIgnoredDuringExecution']:
template_spec['affinity'][affinity_name]['requiredDuringSchedulingIgnoredDuringExecution'] = []
template_spec['affinity'][affinity_name]['requiredDuringSchedulingIgnoredDuringExecution'].append({
'labelSelector': {
'matchExpressions': affinity.get('term').get('match_expressions'),
},
'topologyKey': affinity.get('topology_key'),
})
if node_affinity:
for affinity in node_affinity.get('soft', []):
if not template_spec['affinity']['nodeAffinity']['preferredDuringSchedulingIgnoredDuringExecution']:
template_spec['affinity']['nodeAffinity']['preferredDuringSchedulingIgnoredDuringExecution'] = []
template_spec['affinity']['nodeAffinity']['preferredDuringSchedulingIgnoredDuringExecution'].append({
'weight': affinity.get('weight'),
'preference': {
'matchExpressions': affinity.get('term').get('match_expressions'),
}
})
for affinity in node_affinity.get('hard', []):
if not template_spec['affinity']['nodeAffinity']['requiredDuringSchedulingIgnoredDuringExecution']['nodeSelectorTerms']:
template_spec['affinity']['nodeAffinity']['requiredDuringSchedulingIgnoredDuringExecution']['nodeSelectorTerms'] = []
template_spec['affinity']['nodeAffinity']['requiredDuringSchedulingIgnoredDuringExecution']['nodeSelectorTerms'].append({
'matchExpressions': affinity.get('term').get('match_expressions'),
})
if hostname:
template_spec['hostname'] = hostname
if subdomain:
template_spec['subdomain'] = subdomain
# Define disks
self._define_disks(disks, template_spec, defaults)
# Define cloud init disk if defined:
# Note, that this must be called after _define_disks, so the cloud_init
# is not first in order and it's not used as boot disk:
self._define_cloud_init(cloud_init_nocloud, template_spec)
# Define interfaces:
self._define_interfaces(interfaces, template_spec, defaults)
# Define datavolumes:
self._define_datavolumes(datavolumes, definition['spec'])
return self.merge_dicts(definition, self.resource_definitions[0])
def construct_vm_definition(self, kind, definition, template, defaults=None):
definition = self._construct_vm_definition(kind, definition, template, self.params, defaults)
resource = self.find_supported_resource(kind)
definition = self.set_defaults(resource, definition)
return resource, definition
def construct_vm_template_definition(self, kind, definition, template, params):
definition = self._construct_vm_definition(kind, definition, template, params)
resource = self.find_resource(kind, definition['apiVersion'], fail=True)
# Set defaults:
definition['kind'] = kind
definition['metadata']['name'] = params.get('name')
definition['metadata']['namespace'] = params.get('namespace')
return resource, definition
def execute_crud(self, kind, definition):
""" Module execution """
resource = self.find_supported_resource(kind)
definition = self.set_defaults(resource, definition)
return self.perform_action(resource, definition)

plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py

@@ -1,184 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: kubevirt_cdi_upload
short_description: Upload local VM images to CDI Upload Proxy.
author: KubeVirt Team (@kubevirt)
description:
- Use Openshift Python SDK to create UploadTokenRequest objects.
- Transfer contents of local files to the CDI Upload Proxy.
options:
pvc_name:
description:
- Use to specify the name of the target PersistentVolumeClaim.
required: true
pvc_namespace:
description:
- Use to specify the namespace of the target PersistentVolumeClaim.
required: true
upload_host:
description:
- URL containing the host and port on which the CDI Upload Proxy is available.
- "More info: U(https://github.com/kubevirt/containerized-data-importer/blob/master/doc/upload.md#expose-cdi-uploadproxy-service)"
upload_host_validate_certs:
description:
- Whether or not to verify the CDI Upload Proxy's SSL certificates against your system's CA trust store.
default: true
type: bool
aliases: [ upload_host_verify_ssl ]
path:
description:
- Path of local image file to transfer.
merge_type:
description:
- Whether to override the default patch merge approach with a specific type. By default, the strategic
merge will typically be used.
type: list
choices: [ json, merge, strategic-merge ]
extends_documentation_fragment:
- community.kubernetes.k8s_auth_options
requirements:
- python >= 2.7
- openshift >= 0.8.2
- requests >= 2.0.0
'''
EXAMPLES = '''
- name: Upload local image to pvc-vm1
community.general.kubevirt_cdi_upload:
pvc_namespace: default
pvc_name: pvc-vm1
upload_host: https://localhost:8443
upload_host_validate_certs: false
path: /tmp/cirros-0.4.0-x86_64-disk.img
'''
RETURN = '''# '''
import copy
import traceback
from collections import defaultdict
from ansible_collections.community.kubernetes.plugins.module_utils.common import AUTH_ARG_SPEC
from ansible_collections.community.kubernetes.plugins.module_utils.raw import KubernetesRawModule
# 3rd party imports
try:
import requests
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
SERVICE_ARG_SPEC = {
'pvc_name': {'required': True},
'pvc_namespace': {'required': True},
'upload_host': {'required': True},
'upload_host_validate_certs': {
'type': 'bool',
'default': True,
'aliases': ['upload_host_verify_ssl']
},
'path': {'required': True},
'merge_type': {
'type': 'list',
'choices': ['json', 'merge', 'strategic-merge']
},
}
class KubeVirtCDIUpload(KubernetesRawModule):
def __init__(self, *args, **kwargs):
super(KubeVirtCDIUpload, self).__init__(*args, k8s_kind='UploadTokenRequest', **kwargs)
if not HAS_REQUESTS:
self.fail("This module requires the python 'requests' package. Try `pip install requests`.")
@property
def argspec(self):
""" argspec property builder """
argument_spec = copy.deepcopy(AUTH_ARG_SPEC)
argument_spec.update(SERVICE_ARG_SPEC)
return argument_spec
def execute_module(self):
""" Module execution """
API = 'v1alpha1'
KIND = 'UploadTokenRequest'
self.client = self.get_api_client()
api_version = 'upload.cdi.kubevirt.io/{0}'.format(API)
pvc_name = self.params.get('pvc_name')
pvc_namespace = self.params.get('pvc_namespace')
upload_host = self.params.get('upload_host')
upload_host_verify_ssl = self.params.get('upload_host_validate_certs')
path = self.params.get('path')
definition = defaultdict(defaultdict)
definition['kind'] = KIND
definition['apiVersion'] = api_version
def_meta = definition['metadata']
def_meta['name'] = pvc_name
def_meta['namespace'] = pvc_namespace
def_spec = definition['spec']
def_spec['pvcName'] = pvc_name
# Let's check the file's there before we do anything else
imgfile = open(path, 'rb')
resource = self.find_resource(KIND, api_version, fail=True)
definition = self.set_defaults(resource, definition)
result = self.perform_action(resource, definition)
headers = {'Authorization': "Bearer {0}".format(result['result']['status']['token'])}
url = "{0}/{1}/upload".format(upload_host, API)
ret = requests.post(url, data=imgfile, headers=headers, verify=upload_host_verify_ssl)
if ret.status_code != 200:
self.fail_request("Something went wrong while uploading data", method='POST', url=url,
reason=ret.reason, status_code=ret.status_code)
self.exit_json(changed=True)
def fail_request(self, msg, **kwargs):
req_info = {}
for k, v in kwargs.items():
req_info['req_' + k] = v
self.fail_json(msg=msg, **req_info)
def main():
module = KubeVirtCDIUpload()
try:
module.execute_module()
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()

plugins/modules/cloud/kubevirt/kubevirt_preset.py

@@ -1,154 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: kubevirt_preset
short_description: Manage KubeVirt virtual machine presets
description:
- Use Openshift Python SDK to manage the state of KubeVirt virtual machine presets.
author: KubeVirt Team (@kubevirt)
options:
state:
description:
- Create or delete virtual machine presets.
default: "present"
choices:
- present
- absent
type: str
name:
description:
- Name of the virtual machine preset.
required: true
type: str
namespace:
description:
- Namespace where the virtual machine preset exists.
required: true
type: str
selector:
description:
- "Selector is a label query over a set of virtual machine preset."
type: dict
extends_documentation_fragment:
- community.kubernetes.k8s_auth_options
- community.general.kubevirt_vm_options
- community.general.kubevirt_common_options
requirements:
- python >= 2.7
- openshift >= 0.8.2
'''
EXAMPLES = '''
- name: Create virtual machine preset 'vmi-preset-small'
community.general.kubevirt_preset:
state: present
name: vmi-preset-small
namespace: vms
memory: 64M
selector:
matchLabels:
kubevirt.io/vmPreset: vmi-preset-small
- name: Remove virtual machine preset 'vmi-preset-small'
community.general.kubevirt_preset:
state: absent
name: vmi-preset-small
namespace: vms
'''
RETURN = '''
kubevirt_preset:
description:
- The virtual machine preset managed by the user.
- "This dictionary contains all values returned by the KubeVirt API all options
are described here U(https://kubevirt.io/api-reference/master/definitions.html#_v1_virtualmachineinstancepreset)"
returned: success
type: complex
contains: {}
'''
import copy
import traceback
from ansible_collections.community.kubernetes.plugins.module_utils.common import AUTH_ARG_SPEC
from ansible_collections.community.general.plugins.module_utils.kubevirt import (
virtdict,
KubeVirtRawModule,
VM_COMMON_ARG_SPEC
)
KIND = 'VirtualMachineInstancePreset'
VMP_ARG_SPEC = {
'selector': {'type': 'dict'},
}
class KubeVirtVMPreset(KubeVirtRawModule):
@property
def argspec(self):
""" argspec property builder """
argument_spec = copy.deepcopy(AUTH_ARG_SPEC)
argument_spec.update(VM_COMMON_ARG_SPEC)
argument_spec.update(VMP_ARG_SPEC)
return argument_spec
def execute_module(self):
# Parse parameters specific for this module:
definition = virtdict()
selector = self.params.get('selector')
if selector:
definition['spec']['selector'] = selector
# FIXME: Devices must be set, but we don't yet support any
# attributes there, remove when we do:
definition['spec']['domain']['devices'] = dict()
# defaults for template
defaults = {'disks': [], 'volumes': [], 'interfaces': [], 'networks': []}
# Execute the CRUD of the VM:
dummy, definition = self.construct_vm_definition(KIND, definition, definition, defaults)
result_crud = self.execute_crud(KIND, definition)
changed = result_crud['changed']
result = result_crud.pop('result')
# Return from the module:
self.exit_json(**{
'changed': changed,
'kubevirt_preset': result,
'result': result_crud,
})
def main():
module = KubeVirtVMPreset()
try:
module.execute_module()
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()

plugins/modules/cloud/kubevirt/kubevirt_pvc.py

@@ -1,457 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: kubevirt_pvc
short_description: Manage PVCs on Kubernetes
author: KubeVirt Team (@kubevirt)
description:
- Use Openshift Python SDK to manage PVCs on Kubernetes
- Support Containerized Data Importer out of the box
options:
resource_definition:
description:
- "A partial YAML definition of the PVC object being created/updated. Here you can define Kubernetes
PVC Resource parameters not covered by this module's parameters."
- "NOTE: I(resource_definition) has lower priority than module parameters. If you try to define e.g.
I(metadata.namespace) here, that value will be ignored and I(namespace) used instead."
aliases:
- definition
- inline
type: dict
state:
description:
- "Determines if an object should be created, patched, or deleted. When set to C(present), an object will be
created, if it does not already exist. If set to C(absent), an existing object will be deleted. If set to
C(present), an existing object will be patched, if its attributes differ from those specified using
module options and I(resource_definition)."
default: present
choices:
- present
- absent
force:
description:
- If set to C(True), and I(state) is C(present), an existing object will be replaced.
default: false
type: bool
merge_type:
description:
- Whether to override the default patch merge approach with a specific type.
- "This defaults to C(['strategic-merge', 'merge']), which is ideal for using the same parameters
on resource kinds that combine Custom Resources and built-in resources."
- See U(https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/#use-a-json-merge-patch-to-update-a-deployment)
- If more than one merge_type is given, the merge_types will be tried in order
choices:
- json
- merge
- strategic-merge
type: list
name:
description:
- Use to specify a PVC object name.
required: true
type: str
namespace:
description:
- Use to specify a PVC object namespace.
required: true
type: str
annotations:
description:
- Annotations attached to this object.
- U(https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
type: dict
labels:
description:
- Labels attached to this object.
- U(https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
type: dict
selector:
description:
- A label query over volumes to consider for binding.
- U(https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
type: dict
access_modes:
description:
- Contains the desired access modes the volume should have.
- "More info: U(https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes)"
type: list
size:
description:
- How much storage to allocate to the PVC.
type: str
aliases:
- storage
storage_class_name:
description:
- Name of the StorageClass required by the claim.
- "More info: U(https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1)"
type: str
volume_mode:
description:
- "This defines what type of volume is required by the claim. Value of Filesystem is implied when not
included in claim spec. This is an alpha feature of kubernetes and may change in the future."
type: str
volume_name:
description:
- This is the binding reference to the PersistentVolume backing this claim.
type: str
cdi_source:
description:
- "If data is to be copied onto the PVC using the Containerized Data Importer you can specify the source of
the data (along with any additional configuration) as well as its format."
- "Valid source types are: blank, http, s3, registry, pvc and upload. The last one requires using the
M(community.general.kubevirt_cdi_upload) module to actually perform an upload."
- "Source data format is specified using the optional I(content_type). Valid options are C(kubevirt)
(default; raw image) and C(archive) (tar.gz)."
- "This uses the DataVolume source syntax:
U(https://github.com/kubevirt/containerized-data-importer/blob/master/doc/datavolumes.md#https3registry-source)"
type: dict
wait:
description:
- "If set, this module will wait for the PVC to become bound and CDI (if enabled) to finish its operation
before returning."
- "Used only if I(state) set to C(present)."
- "Unless used in conjunction with I(cdi_source), this might result in a timeout, as clusters may be configured
to not bind PVCs until first usage."
default: false
type: bool
wait_timeout:
description:
- Specifies how much time in seconds to wait for PVC creation to complete if I(wait) option is enabled.
- Default value is reasonably high due to an expectation that CDI might take a while to finish its operation.
type: int
default: 300
extends_documentation_fragment:
- community.kubernetes.k8s_auth_options
requirements:
- python >= 2.7
- openshift >= 0.8.2
'''
EXAMPLES = '''
- name: Create a PVC and import data from an external source
community.general.kubevirt_pvc:
name: pvc1
namespace: default
size: 100Mi
access_modes:
- ReadWriteOnce
cdi_source:
http:
url: https://www.source.example/path/of/data/vm.img
# If the URL points to a tar.gz containing the disk image, uncomment the line below:
#content_type: archive
- name: Create a PVC as a clone from a different PVC
community.general.kubevirt_pvc:
name: pvc2
namespace: default
size: 100Mi
access_modes:
- ReadWriteOnce
cdi_source:
pvc:
namespace: source-ns
name: source-pvc
- name: Create a PVC ready for data upload
community.general.kubevirt_pvc:
name: pvc3
namespace: default
size: 100Mi
access_modes:
- ReadWriteOnce
cdi_source:
upload: yes
# You need the kubevirt_cdi_upload module to actually upload something
- name: Create a PVC with a blank raw image
community.general.kubevirt_pvc:
name: pvc4
namespace: default
size: 100Mi
access_modes:
- ReadWriteOnce
cdi_source:
blank: yes
- name: Create a PVC and fill it with data from a container
community.general.kubevirt_pvc:
name: pvc5
namespace: default
size: 100Mi
access_modes:
- ReadWriteOnce
cdi_source:
registry:
url: "docker://kubevirt/fedora-cloud-registry-disk-demo"
'''
RETURN = '''
result:
description:
- The created, patched, or otherwise present object. Will be empty in the case of a deletion.
returned: success
type: complex
contains:
api_version:
description: The versioned schema of this representation of an object.
returned: success
type: str
kind:
description: Represents the REST resource this object represents.
returned: success
type: str
metadata:
description: Standard object metadata. Includes name, namespace, annotations, labels, etc.
returned: success
type: complex
spec:
description: Specific attributes of the object. Will vary based on the I(api_version) and I(kind).
returned: success
type: complex
status:
description: Current status details for the object.
returned: success
type: complex
items:
description: Returned only when multiple yaml documents are passed to src or resource_definition
returned: when resource_definition or src contains list of objects
type: list
duration:
description: elapsed time of task in seconds
returned: when C(wait) is true
type: int
sample: 48
'''
import copy
import traceback
from collections import defaultdict
from ansible_collections.community.kubernetes.plugins.module_utils.common import AUTH_ARG_SPEC
from ansible_collections.community.kubernetes.plugins.module_utils.raw import KubernetesRawModule
from ansible_collections.community.general.plugins.module_utils.kubevirt import virtdict, KubeVirtRawModule
PVC_ARG_SPEC = {
'name': {'required': True},
'namespace': {'required': True},
'state': {
'type': 'str',
'choices': [
'present', 'absent'
],
'default': 'present'
},
'force': {
'type': 'bool',
'default': False,
},
'merge_type': {
'type': 'list',
'choices': ['json', 'merge', 'strategic-merge']
},
'resource_definition': {
'type': 'dict',
'aliases': ['definition', 'inline']
},
'labels': {'type': 'dict'},
'annotations': {'type': 'dict'},
'selector': {'type': 'dict'},
'access_modes': {'type': 'list'},
'size': {
'type': 'str',
'aliases': ['storage']
},
'storage_class_name': {'type': 'str'},
'volume_mode': {'type': 'str'},
'volume_name': {'type': 'str'},
'cdi_source': {'type': 'dict'},
'wait': {
'type': 'bool',
'default': False
},
'wait_timeout': {
'type': 'int',
'default': 300
}
}
class CreatePVCFailed(Exception):
pass
class KubevirtPVC(KubernetesRawModule):
def __init__(self):
super(KubevirtPVC, self).__init__()
@property
def argspec(self):
argument_spec = copy.deepcopy(AUTH_ARG_SPEC)
argument_spec.update(PVC_ARG_SPEC)
return argument_spec
@staticmethod
def fix_serialization(obj):
if obj and hasattr(obj, 'to_dict'):
return obj.to_dict()
return obj
def _parse_cdi_source(self, _cdi_src, metadata):
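# Translate the module's cdi_source parameter into the annotations and labels that the
# CDI (containerized-data-importer) controller understands; 'metadata' is modified in place.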
cdi_src = copy.deepcopy(_cdi_src)
annotations = metadata['annotations']
labels = metadata['labels']
valid_content_types = ('kubevirt', 'archive')
valid_sources = ('http', 's3', 'pvc', 'upload', 'blank', 'registry')
if 'content_type' in cdi_src:
content_type = cdi_src.pop('content_type')
if content_type not in valid_content_types:
raise ValueError("cdi_source.content_type must be one of {0}, not: '{1}'".format(
valid_content_types, content_type))
annotations['cdi.kubevirt.io/storage.contentType'] = content_type
if len(cdi_src) != 1:
raise ValueError("You must specify exactly one valid CDI source, not {0}: {1}".format(len(cdi_src), tuple(cdi_src.keys())))
src_type = tuple(cdi_src.keys())[0]
src_spec = cdi_src[src_type]
if src_type not in valid_sources:
raise ValueError("Got an invalid CDI source type: '{0}', must be one of {1}".format(src_type, valid_sources))
# The 'app' label applies to every CDI source type except 'pvc' clones, which override it below
labels['app'] = 'containerized-data-importer'
if src_type == 'upload':
annotations['cdi.kubevirt.io/storage.upload.target'] = ''
elif src_type == 'blank':
annotations['cdi.kubevirt.io/storage.import.source'] = 'none'
elif src_type == 'pvc':
if not isinstance(src_spec, dict) or sorted(src_spec.keys()) != ['name', 'namespace']:
raise ValueError("CDI Source 'pvc' requires specifying 'name' and 'namespace' (and nothing else)")
labels['app'] = 'host-assisted-cloning'
annotations['k8s.io/CloneRequest'] = '{0}/{1}'.format(src_spec['namespace'], src_spec['name'])
elif src_type in ('http', 's3', 'registry'):
if not isinstance(src_spec, dict) or 'url' not in src_spec:
raise ValueError("CDI Source '{0}' requires specifying 'url'".format(src_type))
unknown_params = set(src_spec.keys()).difference(set(('url', 'secretRef', 'certConfigMap')))
if unknown_params:
raise ValueError("CDI Source '{0}' does not know recognize params: {1}".format(src_type, tuple(unknown_params)))
annotations['cdi.kubevirt.io/storage.import.source'] = src_type
annotations['cdi.kubevirt.io/storage.import.endpoint'] = src_spec['url']
if 'secretRef' in src_spec:
annotations['cdi.kubevirt.io/storage.import.secretName'] = src_spec['secretRef']
if 'certConfigMap' in src_spec:
annotations['cdi.kubevirt.io/storage.import.certConfigMap'] = src_spec['certConfigMap']
def _wait_for_creation(self, resource, uid):
return_obj = None
desired_cdi_status = 'Succeeded'
use_cdi = True if self.params.get('cdi_source') else False
if use_cdi and 'upload' in self.params['cdi_source']:
desired_cdi_status = 'Running'
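# For 'upload' sources the import pod only reaches the 'Running' phase and then waits
# for data (for example via the kubevirt_cdi_upload module), so 'Running' rather than
# 'Succeeded' counts as success here.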
for event in resource.watch(namespace=self.namespace, timeout=self.params.get('wait_timeout')):
entity = event['object']
metadata = entity.metadata
if not hasattr(metadata, 'uid') or metadata.uid != uid:
continue
if entity.status.phase == 'Bound':
if use_cdi and hasattr(metadata, 'annotations'):
import_status = metadata.annotations.get('cdi.kubevirt.io/storage.pod.phase')
if import_status == desired_cdi_status:
return_obj = entity
break
elif import_status == 'Failed':
raise CreatePVCFailed("PVC creation incomplete; importing data failed")
else:
return_obj = entity
break
elif entity.status.phase == 'Failed':
raise CreatePVCFailed("PVC creation failed")
if not return_obj:
raise CreatePVCFailed("PVC creation timed out")
return self.fix_serialization(return_obj)
def execute_module(self):
KIND = 'PersistentVolumeClaim'
API = 'v1'
definition = virtdict()
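# virtdict() behaves as an auto-vivifying nested dictionary, which is why deep keys such as
# spec['resources']['requests']['storage'] below can be assigned without creating the
# intermediate dictionaries first.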
definition['kind'] = KIND
definition['apiVersion'] = API
metadata = definition['metadata']
metadata['name'] = self.params.get('name')
metadata['namespace'] = self.params.get('namespace')
if self.params.get('annotations'):
metadata['annotations'] = self.params.get('annotations')
if self.params.get('labels'):
metadata['labels'] = self.params.get('labels')
if self.params.get('cdi_source'):
self._parse_cdi_source(self.params.get('cdi_source'), metadata)
spec = definition['spec']
if self.params.get('access_modes'):
spec['accessModes'] = self.params.get('access_modes')
if self.params.get('size'):
spec['resources']['requests']['storage'] = self.params.get('size')
if self.params.get('storage_class_name'):
spec['storageClassName'] = self.params.get('storage_class_name')
if self.params.get('selector'):
spec['selector'] = self.params.get('selector')
if self.params.get('volume_mode'):
spec['volumeMode'] = self.params.get('volume_mode')
if self.params.get('volume_name'):
spec['volumeName'] = self.params.get('volume_name')
# 'resource_definition:' has lower priority than module parameters
definition = dict(KubeVirtRawModule.merge_dicts(definition, self.resource_definitions[0]))
self.client = self.get_api_client()
resource = self.find_resource(KIND, API, fail=True)
definition = self.set_defaults(resource, definition)
result = self.perform_action(resource, definition)
if self.params.get('wait') and self.params.get('state') == 'present':
result['result'] = self._wait_for_creation(resource, result['result']['metadata']['uid'])
self.exit_json(**result)
def main():
module = KubevirtPVC()
try:
module.execute_module()
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()

View file

@ -1,211 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: kubevirt_rs
short_description: Manage KubeVirt virtual machine replica sets
description:
- Use the OpenShift Python SDK to manage the state of KubeVirt virtual machine replica sets.
author: KubeVirt Team (@kubevirt)
options:
state:
description:
- Create or delete virtual machine replica sets.
default: "present"
choices:
- present
- absent
type: str
name:
description:
- Name of the virtual machine replica set.
required: true
type: str
namespace:
description:
- Namespace where the virtual machine replica set exists.
required: true
type: str
selector:
description:
- "Selector is a label query over a set of virtual machine."
required: true
type: dict
replicas:
description:
- Number of desired pods. This is a pointer to distinguish between an explicit zero and an unspecified value.
- Replicas defaults to 1 for a newly created replica set.
type: int
extends_documentation_fragment:
- community.kubernetes.k8s_auth_options
- community.general.kubevirt_vm_options
- community.general.kubevirt_common_options
requirements:
- python >= 2.7
- openshift >= 0.8.2
'''
EXAMPLES = '''
- name: Create virtual machine replica set 'myvmir'
community.general.kubevirt_rs:
state: present
name: myvmir
namespace: vms
wait: true
replicas: 3
memory: 64M
labels:
myvmi: myvmi
selector:
matchLabels:
myvmi: myvmi
disks:
- name: containerdisk
volume:
containerDisk:
image: kubevirt/cirros-container-disk-demo:latest
path: /custom-disk/cirros.img
disk:
bus: virtio
- name: Remove virtual machine replica set 'myvmir'
community.general.kubevirt_rs:
state: absent
name: myvmir
namespace: vms
wait: true
'''
RETURN = '''
kubevirt_rs:
description:
- The virtual machine replica set managed by the user.
- "This dictionary contains all values returned by the KubeVirt API; all options
are described at U(https://kubevirt.io/api-reference/master/definitions.html#_v1_virtualmachineinstance)."
returned: success
type: complex
contains: {}
'''
import copy
import traceback
from ansible_collections.community.kubernetes.plugins.module_utils.common import AUTH_ARG_SPEC
from ansible_collections.community.general.plugins.module_utils.kubevirt import (
virtdict,
KubeVirtRawModule,
VM_COMMON_ARG_SPEC,
)
KIND = 'VirtualMachineInstanceReplicaSet'
VMIR_ARG_SPEC = {
'replicas': {'type': 'int'},
'selector': {'type': 'dict'},
}
class KubeVirtVMIRS(KubeVirtRawModule):
@property
def argspec(self):
""" argspec property builder """
argument_spec = copy.deepcopy(AUTH_ARG_SPEC)
argument_spec.update(copy.deepcopy(VM_COMMON_ARG_SPEC))
argument_spec.update(copy.deepcopy(VMIR_ARG_SPEC))
return argument_spec
def wait_for_replicas(self, replicas):
""" Wait for ready_replicas to equal the requested number of replicas. """
resource = self.find_supported_resource(KIND)
return_obj = None
for event in resource.watch(namespace=self.namespace, timeout=self.params.get('wait_timeout')):
entity = event['object']
if entity.metadata.name != self.name:
continue
status = entity.get('status', {})
readyReplicas = status.get('readyReplicas', 0)
if readyReplicas == replicas:
return_obj = entity
break
if not return_obj:
self.fail_json(msg="Error fetching the patched object. Try a higher wait_timeout value.")
if replicas and return_obj.status.readyReplicas is None:
self.fail_json(msg="Failed to fetch the number of ready replicas. Try a higher wait_timeout value.")
if replicas and return_obj.status.readyReplicas != replicas:
self.fail_json(msg="Number of ready replicas is {0}. Failed to reach {1} ready replicas within "
"the wait_timeout period.".format(return_obj.status.ready_replicas, replicas))
return return_obj.to_dict()
def execute_module(self):
# Parse parameters specific for this module:
definition = virtdict()
selector = self.params.get('selector')
replicas = self.params.get('replicas')
if selector:
definition['spec']['selector'] = selector
if replicas is not None:
definition['spec']['replicas'] = replicas
# defaults for template
defaults = {'disks': [], 'volumes': [], 'interfaces': [], 'networks': []}
# Execute the CRUD of the replica set:
template = definition['spec']['template']
dummy, definition = self.construct_vm_definition(KIND, definition, template, defaults)
result_crud = self.execute_crud(KIND, definition)
changed = result_crud['changed']
result = result_crud.pop('result')
# When creating a new VMIRS object without specifying `replicas`, assume it's '1' to make the
# wait logic work correctly
if changed and result_crud['method'] == 'create' and replicas is None:
replicas = 1
# Wait for the new number of ready replicas after a CRUD update
# Note1: doesn't work correctly when reducing number of replicas due to how VMIRS works (as of kubevirt 1.5.0)
# Note2: not the place to wait for the VMIs to get deleted when deleting the VMIRS object; that *might* be
# achievable in execute_crud(); keywords: orphanDependents, propagationPolicy, DeleteOptions
if self.params.get('wait') and replicas is not None and self.params.get('state') == 'present':
result = self.wait_for_replicas(replicas)
# Return from the module:
self.exit_json(**{
'changed': changed,
'kubevirt_rs': result,
'result': result_crud,
})
def main():
module = KubeVirtVMIRS()
try:
module.execute_module()
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()

View file

@ -1,385 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: kubevirt_template
short_description: Manage KubeVirt templates
description:
- Use the OpenShift Python SDK to manage the state of KubeVirt templates.
author: KubeVirt Team (@kubevirt)
options:
name:
description:
- Name of the Template object.
required: true
type: str
namespace:
description:
- Namespace where the Template object exists.
required: true
type: str
objects:
description:
- List of any valid API objects, such as a I(DeploymentConfig), I(Service), etc. The object
will be created exactly as defined here, with any parameter values substituted in prior to creation.
The definition of these objects can reference parameters defined earlier.
- The list can also contain objects of the I(VirtualMachine) kind. When passing a I(VirtualMachine),
use the Ansible parameter structure rather than the Kubernetes API structure. For more information,
see the M(community.general.kubevirt_vm) module and the EXAMPLES section, which contains an example.
type: list
merge_type:
description:
- Whether to override the default patch merge approach with a specific type. By default, the strategic
merge will typically be used.
type: list
choices: [ json, merge, strategic-merge ]
display_name:
description:
- "A brief, user-friendly name, which can be employed by user interfaces."
type: str
description:
description:
- A description of the template.
- Include enough detail that the user will understand what is being deployed
and any caveats they need to know before deploying. It should also provide links to additional information,
such as a README file.
type: str
long_description:
description:
- "Additional template description. This may be displayed by the service catalog, for example."
type: str
provider_display_name:
description:
- "The name of the person or organization providing the template."
type: str
documentation_url:
description:
- "A URL referencing further documentation for the template."
type: str
support_url:
description:
- "A URL where support can be obtained for the template."
type: str
editable:
description:
- "Extension for hinting at which elements should be considered editable.
List of jsonpath selectors. The jsonpath root is the objects: element of the template."
- This parameter can be used only when the KubeVirt add-on is installed on your OpenShift cluster.
type: list
default_disk:
description:
- "The goal of default disk is to define what kind of disk is supported by the OS mainly in
terms of bus (ide, scsi, sata, virtio, ...)"
- The C(default_disk) parameter defines a configuration overlay that is applied on top of disks
during virtual machine creation to provide global compatibility and/or performance defaults.
- This parameter can be used only when the KubeVirt add-on is installed on your OpenShift cluster.
type: dict
default_volume:
description:
- "The goal of default volume is to be able to configure mostly performance parameters like
caches if those are exposed by the underlying volume implementation."
- The C(default_volume) parameter defines a configuration overlay that is applied on top of volumes
during virtual machine creation to provide global compatibility and/or performance defaults.
- This parameter can be used only when the KubeVirt add-on is installed on your OpenShift cluster.
type: dict
default_nic:
description:
- "The goal of default network is similar to I(default_disk) and should be used as a template
to ensure OS compatibility and performance."
- The C(default_nic) parameter define configuration overlay for nic that will be applied on top of nics
during virtual machine creation to define global compatibility and/or performance defaults defined here.
- This is parameter can be used only when kubevirt addon is installed on your openshift cluster.
type: dict
default_network:
description:
- "The goal of default network is similar to I(default_volume) and should be used as a template
that specifies performance and connection parameters (L2 bridge for example)"
- The C(default_network) parameter defines a configuration overlay that is applied on top of networks
during virtual machine creation to provide global compatibility and/or performance defaults.
- This parameter can be used only when the KubeVirt add-on is installed on your OpenShift cluster.
type: dict
icon_class:
description:
- "An icon to be displayed with your template in the web console. Choose from our existing logo
icons when possible. You can also use icons from FontAwesome. Alternatively, provide icons through
CSS customizations that can be added to an OpenShift Container Platform cluster that uses your template.
You must specify an icon class that exists, or it will prevent falling back to the generic icon."
type: str
parameters:
description:
- "Parameters allow a value to be supplied by the user or generated when the template is instantiated.
Then, that value is substituted wherever the parameter is referenced. References can be defined in any
field in the objects list field. This is useful for generating random passwords or allowing the user to
supply a host name or other user-specific value that is required to customize the template."
- "More information can be found at: U(https://docs.openshift.com/container-platform/3.6/dev_guide/templates.html#writing-parameters)"
type: list
version:
description:
- Template structure version.
- This parameter can be used only when the KubeVirt add-on is installed on your OpenShift cluster.
type: str
extends_documentation_fragment:
- community.kubernetes.k8s_auth_options
- community.kubernetes.k8s_state_options
requirements:
- python >= 2.7
- openshift >= 0.8.2
'''
EXAMPLES = '''
- name: Create template 'mytemplate'
community.general.kubevirt_template:
state: present
name: myvmtemplate
namespace: templates
display_name: Generic cirros template
description: Basic cirros template
long_description: Verbose description of cirros template
provider_display_name: Just Be Cool, Inc.
documentation_url: http://theverycoolcompany.com
support_url: http://support.theverycoolcompany.com
icon_class: icon-linux
default_disk:
disk:
bus: virtio
default_nic:
model: virtio
default_network:
resource:
resourceName: bridge.network.kubevirt.io/cnvmgmt
default_volume:
containerDisk:
image: kubevirt/cirros-container-disk-demo:latest
objects:
- name: ${NAME}
kind: VirtualMachine
memory: ${MEMORY_SIZE}
state: present
namespace: vms
parameters:
- name: NAME
description: VM name
generate: expression
from: 'vm-[A-Za-z0-9]{8}'
- name: MEMORY_SIZE
description: Memory size
value: 1Gi
- name: Remove template 'myvmtemplate'
community.general.kubevirt_template:
state: absent
name: myvmtemplate
namespace: templates
'''
RETURN = '''
kubevirt_template:
description:
- The template dictionary specification returned by the API.
returned: success
type: complex
contains: {}
'''
import copy
import traceback
from ansible_collections.community.kubernetes.plugins.module_utils.common import AUTH_ARG_SPEC
from ansible_collections.community.general.plugins.module_utils.kubevirt import (
virtdict,
KubeVirtRawModule,
API_GROUP,
MAX_SUPPORTED_API_VERSION
)
TEMPLATE_ARG_SPEC = {
'name': {'required': True},
'namespace': {'required': True},
'state': {
'default': 'present',
'choices': ['present', 'absent'],
},
'force': {
'type': 'bool',
'default': False,
},
'merge_type': {
'type': 'list',
'choices': ['json', 'merge', 'strategic-merge']
},
'objects': {
'type': 'list',
},
'display_name': {
'type': 'str',
},
'description': {
'type': 'str',
},
'long_description': {
'type': 'str',
},
'provider_display_name': {
'type': 'str',
},
'documentation_url': {
'type': 'str',
},
'support_url': {
'type': 'str',
},
'icon_class': {
'type': 'str',
},
'version': {
'type': 'str',
},
'editable': {
'type': 'list',
},
'default_disk': {
'type': 'dict',
},
'default_volume': {
'type': 'dict',
},
'default_network': {
'type': 'dict',
},
'default_nic': {
'type': 'dict',
},
'parameters': {
'type': 'list',
},
}
class KubeVirtVMTemplate(KubeVirtRawModule):
@property
def argspec(self):
""" argspec property builder """
argument_spec = copy.deepcopy(AUTH_ARG_SPEC)
argument_spec.update(TEMPLATE_ARG_SPEC)
return argument_spec
def execute_module(self):
# Parse parameters specific for this module:
definition = virtdict()
# Execute the CRUD of VM template:
kind = 'Template'
template_api_version = 'template.openshift.io/v1'
# Fill in template parameters:
definition['parameters'] = self.params.get('parameters')
# Fill in the default Label
labels = definition['metadata']['labels']
labels['template.cnv.io/type'] = 'vm'
# Fill in Openshift/Kubevirt template annotations:
annotations = definition['metadata']['annotations']
if self.params.get('display_name'):
annotations['openshift.io/display-name'] = self.params.get('display_name')
if self.params.get('description'):
annotations['description'] = self.params.get('description')
if self.params.get('long_description'):
annotations['openshift.io/long-description'] = self.params.get('long_description')
if self.params.get('provider_display_name'):
annotations['openshift.io/provider-display-name'] = self.params.get('provider_display_name')
if self.params.get('documentation_url'):
annotations['openshift.io/documentation-url'] = self.params.get('documentation_url')
if self.params.get('support_url'):
annotations['openshift.io/support-url'] = self.params.get('support_url')
if self.params.get('icon_class'):
annotations['iconClass'] = self.params.get('icon_class')
if self.params.get('version'):
annotations['template.cnv.io/version'] = self.params.get('version')
# TODO: Make it more Ansible-like, so the user doesn't have to specify an API JSON path, but rather Ansible params:
if self.params.get('editable'):
annotations['template.cnv.io/editable'] = self.params.get('editable')
# Set defaults annotations:
if self.params.get('default_disk'):
annotations['defaults.template.cnv.io/disk'] = self.params.get('default_disk').get('name')
if self.params.get('default_volume'):
annotations['defaults.template.cnv.io/volume'] = self.params.get('default_volume').get('name')
if self.params.get('default_nic'):
annotations['defaults.template.cnv.io/nic'] = self.params.get('default_nic').get('name')
if self.params.get('default_network'):
annotations['defaults.template.cnv.io/network'] = self.params.get('default_network').get('name')
# Process objects:
self.client = self.get_api_client()
definition['objects'] = []
objects = self.params.get('objects') or []
for obj in objects:
if obj['kind'] != 'VirtualMachine':
definition['objects'].append(obj)
else:
vm_definition = virtdict()
# Set VM defaults:
if self.params.get('default_disk'):
vm_definition['spec']['template']['spec']['domain']['devices']['disks'] = [self.params.get('default_disk')]
if self.params.get('default_volume'):
vm_definition['spec']['template']['spec']['volumes'] = [self.params.get('default_volume')]
if self.params.get('default_nic'):
vm_definition['spec']['template']['spec']['domain']['devices']['interfaces'] = [self.params.get('default_nic')]
if self.params.get('default_network'):
vm_definition['spec']['template']['spec']['networks'] = [self.params.get('default_network')]
# Set kubevirt API version:
vm_definition['apiVersion'] = '%s/%s' % (API_GROUP, MAX_SUPPORTED_API_VERSION)
# Construct k8s vm API object:
vm_template = vm_definition['spec']['template']
dummy, vm_def = self.construct_vm_template_definition('VirtualMachine', vm_definition, vm_template, obj)
definition['objects'].append(vm_def)
# Create template:
resource = self.client.resources.get(api_version=template_api_version, kind=kind, name='templates')
definition = self.set_defaults(resource, definition)
result = self.perform_action(resource, definition)
# Return from the module:
self.exit_json(**{
'changed': result['changed'],
'kubevirt_template': result.pop('result'),
'result': result,
})
def main():
module = KubeVirtVMTemplate()
try:
module.execute_module()
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()

View file

@ -1,469 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: kubevirt_vm
short_description: Manage KubeVirt virtual machines
description:
- Use the OpenShift Python SDK to manage the state of KubeVirt virtual machines.
author: KubeVirt Team (@kubevirt)
options:
state:
description:
- Set the virtual machine to either I(present), I(absent), I(running) or I(stopped).
- "I(present) - Create or update a virtual machine. (And run it if it's ephemeral.)"
- "I(absent) - Remove a virtual machine."
- "I(running) - Create or update a virtual machine and run it."
- "I(stopped) - Stop a virtual machine. (This deletes ephemeral VMs.)"
default: "present"
choices:
- present
- absent
- running
- stopped
type: str
name:
description:
- Name of the virtual machine.
required: true
type: str
namespace:
description:
- Namespace where the virtual machine exists.
required: true
type: str
ephemeral:
description:
- If I(true), an ephemeral virtual machine will be created. Once destroyed, it won't be accessible again.
- Works only with C(state) I(present) and I(absent).
type: bool
default: false
datavolumes:
description:
- "DataVolumes are a way to automate importing virtual machine disks onto pvcs during the virtual machine's
launch flow. Without using a DataVolume, users have to prepare a pvc with a disk image before assigning
it to a VM or VMI manifest. With a DataVolume, both the pvc creation and import is automated on behalf of the user."
type: list
template:
description:
- "Name of Template to be used in creation of a virtual machine."
type: str
template_parameters:
description:
- "New values of parameters from Template."
type: dict
extends_documentation_fragment:
- community.kubernetes.k8s_auth_options
- community.general.kubevirt_vm_options
- community.general.kubevirt_common_options
requirements:
- python >= 2.7
- openshift >= 0.8.2
'''
EXAMPLES = '''
- name: Start virtual machine 'myvm'
community.general.kubevirt_vm:
state: running
name: myvm
namespace: vms
- name: Create virtual machine 'myvm' and start it
community.general.kubevirt_vm:
state: running
name: myvm
namespace: vms
memory: 64Mi
cpu_cores: 1
bootloader: efi
smbios_uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
cpu_model: Conroe
headless: true
hugepage_size: 2Mi
tablets:
- bus: virtio
name: tablet1
cpu_limit: 3
cpu_shares: 2
disks:
- name: containerdisk
volume:
containerDisk:
image: kubevirt/cirros-container-disk-demo:latest
path: /custom-disk/cirros.img
disk:
bus: virtio
- name: Create virtual machine 'myvm' with multus network interface
community.general.kubevirt_vm:
name: myvm
namespace: vms
memory: 512M
interfaces:
- name: default
bridge: {}
network:
pod: {}
- name: mynet
bridge: {}
network:
multus:
networkName: mynetconf
- name: Combine inline definition with Ansible parameters
community.general.kubevirt_vm:
# Kubernetes specification:
definition:
metadata:
labels:
app: galaxy
service: web
origin: vmware
# Ansible parameters:
state: running
name: myvm
namespace: vms
memory: 64M
disks:
- name: containerdisk
volume:
containerDisk:
image: kubevirt/cirros-container-disk-demo:latest
path: /custom-disk/cirros.img
disk:
bus: virtio
- name: Start ephemeral virtual machine 'myvm' and wait to be running
community.general.kubevirt_vm:
ephemeral: true
state: running
wait: true
wait_timeout: 180
name: myvm
namespace: vms
memory: 64M
labels:
kubevirt.io/vm: myvm
disks:
- name: containerdisk
volume:
containerDisk:
image: kubevirt/cirros-container-disk-demo:latest
path: /custom-disk/cirros.img
disk:
bus: virtio
- name: Start fedora vm with cloud init
community.general.kubevirt_vm:
state: running
wait: true
name: myvm
namespace: vms
memory: 1024M
cloud_init_nocloud:
userData: |-
#cloud-config
password: fedora
chpasswd: { expire: False }
disks:
- name: containerdisk
volume:
containerDisk:
image: kubevirt/fedora-cloud-container-disk-demo:latest
path: /disk/fedora.qcow2
disk:
bus: virtio
node_affinity:
soft:
- weight: 1
term:
match_expressions:
- key: security
operator: In
values:
- S2
- name: Create virtual machine with datavolume and specify node affinity
community.general.kubevirt_vm:
name: myvm
namespace: default
memory: 1024Mi
datavolumes:
- name: mydv
source:
http:
url: https://url/disk.qcow2
pvc:
accessModes:
- ReadWriteOnce
storage: 5Gi
node_affinity:
hard:
- term:
match_expressions:
- key: security
operator: In
values:
- S1
- name: Remove virtual machine 'myvm'
community.general.kubevirt_vm:
state: absent
name: myvm
namespace: vms
'''
RETURN = '''
kubevirt_vm:
description:
- The virtual machine dictionary specification returned by the API.
- "This dictionary contains all values returned by the KubeVirt API all options
are described here U(https://kubevirt.io/api-reference/master/definitions.html#_v1_virtualmachine)"
returned: success
type: complex
contains: {}
'''
import copy
import traceback
from ansible_collections.community.kubernetes.plugins.module_utils.common import AUTH_ARG_SPEC
from ansible_collections.community.general.plugins.module_utils.kubevirt import (
virtdict,
KubeVirtRawModule,
VM_COMMON_ARG_SPEC,
VM_SPEC_DEF_ARG_SPEC
)
VM_ARG_SPEC = {
'ephemeral': {'type': 'bool', 'default': False},
'state': {
'type': 'str',
'choices': [
'present', 'absent', 'running', 'stopped'
],
'default': 'present'
},
'datavolumes': {'type': 'list'},
'template': {'type': 'str'},
'template_parameters': {'type': 'dict'},
}
# Which params (can) modify 'spec:' contents of a VM:
VM_SPEC_PARAMS = list(VM_SPEC_DEF_ARG_SPEC.keys()) + ['datavolumes', 'template', 'template_parameters']
class KubeVirtVM(KubeVirtRawModule):
@property
def argspec(self):
""" argspec property builder """
argument_spec = copy.deepcopy(AUTH_ARG_SPEC)
argument_spec.update(VM_COMMON_ARG_SPEC)
argument_spec.update(VM_ARG_SPEC)
return argument_spec
@staticmethod
def fix_serialization(obj):
if obj and hasattr(obj, 'to_dict'):
return obj.to_dict()
return obj
def _wait_for_vmi_running(self):
for event in self._kind_resource.watch(namespace=self.namespace, timeout=self.params.get('wait_timeout')):
entity = event['object']
if entity.metadata.name != self.name:
continue
status = entity.get('status', {})
phase = status.get('phase', None)
if phase == 'Running':
return entity
self.fail("Timeout occurred while waiting for virtual machine to start. Maybe try a higher wait_timeout value?")
def _wait_for_vm_state(self, new_state):
if new_state == 'running':
want_created = want_ready = True
else:
want_created = want_ready = False
for event in self._kind_resource.watch(namespace=self.namespace, timeout=self.params.get('wait_timeout')):
entity = event['object']
if entity.metadata.name != self.name:
continue
status = entity.get('status', {})
created = status.get('created', False)
ready = status.get('ready', False)
if (created, ready) == (want_created, want_ready):
return entity
self.fail("Timeout occurred while waiting for virtual machine to achieve '{0}' state. "
"Maybe try a higher wait_timeout value?".format(new_state))
def manage_vm_state(self, new_state, already_changed):
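# Reconcile the VirtualMachine's spec.running flag with the requested state via a merge
# patch, then optionally wait for status.created/status.ready to match.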
new_running = True if new_state == 'running' else False
changed = False
k8s_obj = {}
if not already_changed:
k8s_obj = self.get_resource(self._kind_resource)
if not k8s_obj:
self.fail("VirtualMachine object disappeared during module operation, aborting.")
if k8s_obj.spec.get('running', False) == new_running:
return False, k8s_obj
newdef = dict(metadata=dict(name=self.name, namespace=self.namespace), spec=dict(running=new_running))
k8s_obj, err = self.patch_resource(self._kind_resource, newdef, k8s_obj,
self.name, self.namespace, merge_type='merge')
if err:
self.fail_json(**err)
else:
changed = True
if self.params.get('wait'):
k8s_obj = self._wait_for_vm_state(new_state)
return changed, k8s_obj
def _process_template_defaults(self, proccess_template, processedtemplate, defaults):
def set_template_default(default_name, default_name_index, definition_spec):
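# Read the name of the template's default disk/volume/interface/network from the
# corresponding annotation, move the matching entries into 'defaults' so they act as
# module-level defaults, and keep the remaining entries on the processed template spec.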
default_value = proccess_template['metadata']['annotations'][default_name]
if default_value:
values = definition_spec[default_name_index]
default_values = [d for d in values if d.get('name') == default_value]
defaults[default_name_index] = default_values
if definition_spec[default_name_index] is None:
definition_spec[default_name_index] = []
definition_spec[default_name_index].extend([d for d in values if d.get('name') != default_value])
devices = processedtemplate['spec']['template']['spec']['domain']['devices']
spec = processedtemplate['spec']['template']['spec']
set_template_default('defaults.template.cnv.io/disk', 'disks', devices)
set_template_default('defaults.template.cnv.io/volume', 'volumes', spec)
set_template_default('defaults.template.cnv.io/nic', 'interfaces', devices)
set_template_default('defaults.template.cnv.io/network', 'networks', spec)
def construct_definition(self, kind, our_state, ephemeral):
definition = virtdict()
processedtemplate = {}
# Construct the API object definition:
defaults = {'disks': [], 'volumes': [], 'interfaces': [], 'networks': []}
vm_template = self.params.get('template')
if vm_template:
# Find the template the VM should be created from:
template_resource = self.client.resources.get(api_version='template.openshift.io/v1', kind='Template', name='templates')
proccess_template = template_resource.get(name=vm_template, namespace=self.params.get('namespace'))
# Set proper template values taken from module option 'template_parameters':
for k, v in self.params.get('template_parameters', {}).items():
for parameter in proccess_template.parameters:
if parameter.name == k:
parameter.value = v
# Process the template:
processedtemplates_res = self.client.resources.get(api_version='template.openshift.io/v1', kind='Template', name='processedtemplates')
processedtemplate = processedtemplates_res.create(proccess_template.to_dict()).to_dict()['objects'][0]
# Process defaults of the template:
self._process_template_defaults(proccess_template, processedtemplate, defaults)
if not ephemeral:
definition['spec']['running'] = our_state == 'running'
template = definition if ephemeral else definition['spec']['template']
template['metadata']['labels']['vm.cnv.io/name'] = self.params.get('name')
dummy, definition = self.construct_vm_definition(kind, definition, template, defaults)
return self.merge_dicts(definition, processedtemplate)
def execute_module(self):
# Parse parameters specific to this module:
ephemeral = self.params.get('ephemeral')
k8s_state = our_state = self.params.get('state')
kind = 'VirtualMachineInstance' if ephemeral else 'VirtualMachine'
_used_params = [name for name in self.params if self.params[name] is not None]
# Is 'spec:' getting changed?
vm_spec_change = True if set(VM_SPEC_PARAMS).intersection(_used_params) else False
changed = False
crud_executed = False
method = ''
# Underlying module_utils/k8s/* code knows only of state == present/absent; let's make sure not to confuse it
if ephemeral:
# Ephemerals don't actually support running/stopped; we treat those as aliases for present/absent instead
if our_state == 'running':
self.params['state'] = k8s_state = 'present'
elif our_state == 'stopped':
self.params['state'] = k8s_state = 'absent'
else:
if our_state != 'absent':
self.params['state'] = k8s_state = 'present'
# Start with fetching the current object to make sure it exists
# If it does, but we end up not performing any operations on it, at least we'll be able to return
# its current contents as part of the final json
self.client = self.get_api_client()
self._kind_resource = self.find_supported_resource(kind)
k8s_obj = self.get_resource(self._kind_resource)
if not self.check_mode and not vm_spec_change and k8s_state != 'absent' and not k8s_obj:
self.fail("It's impossible to create an empty VM or change state of a non-existent VM.")
# If there are (potential) changes to `spec:` or we want to delete the object, that warrants a full CRUD
# Also check_mode always warrants a CRUD, as that'll produce a sane result
if vm_spec_change or k8s_state == 'absent' or self.check_mode:
definition = self.construct_definition(kind, our_state, ephemeral)
result = self.execute_crud(kind, definition)
changed = result['changed']
k8s_obj = result['result']
method = result['method']
crud_executed = True
if ephemeral and self.params.get('wait') and k8s_state == 'present' and not self.check_mode:
# Waiting for k8s_state==absent is handled inside execute_crud()
k8s_obj = self._wait_for_vmi_running()
if not ephemeral and our_state in ['running', 'stopped'] and not self.check_mode:
# State==present/absent doesn't involve any additional VMI state management and is fully
# handled inside execute_crud() (including wait logic)
patched, k8s_obj = self.manage_vm_state(our_state, crud_executed)
changed = changed or patched
if changed:
method = method or 'patch'
# Return from the module:
self.exit_json(**{
'changed': changed,
'kubevirt_vm': self.fix_serialization(k8s_obj),
'method': method
})
def main():
module = KubeVirtVM()
try:
module.execute_module()
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()

View file

@ -1 +0,0 @@
./cloud/kubevirt/kubevirt_cdi_upload.py

View file

@ -1 +0,0 @@
./cloud/kubevirt/kubevirt_preset.py

View file

@ -1 +0,0 @@
./cloud/kubevirt/kubevirt_pvc.py

View file

@ -1 +0,0 @@
./cloud/kubevirt/kubevirt_rs.py

View file

@ -1 +0,0 @@
./cloud/kubevirt/kubevirt_template.py

View file

@ -1 +0,0 @@
./cloud/kubevirt/kubevirt_vm.py

View file

@ -1 +0,0 @@
shippable/posix/group2

View file

@ -1 +0,0 @@
setuptools < 45 ; python_version <= '2.7' # setuptools 45 and later require python 3.5 or later

View file

@ -1,70 +0,0 @@
#!/usr/bin/env python
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import sys
def check_hosts(contrib, plugin):
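# Both outputs must expose exactly the same hosts under _meta.hostvars.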
contrib_hosts = sorted(contrib['_meta']['hostvars'].keys())
plugin_hosts = sorted(plugin['_meta']['hostvars'].keys())
assert contrib_hosts == plugin_hosts
return contrib_hosts, plugin_hosts
def check_groups(contrib, plugin):
contrib_groups = set(contrib.keys())
plugin_groups = set(plugin.keys())
missing_groups = contrib_groups.difference(plugin_groups)
if missing_groups:
print("groups: %s are missing from the plugin" % missing_groups)
assert not missing_groups
return contrib_groups, plugin_groups
def check_host_vars(key, value, plugin, host):
# tags are a dict in the plugin
if key.startswith('ec2_tag'):
print('assert tag', key, value)
assert 'tags' in plugin['_meta']['hostvars'][host], 'b file does not have tags in host'
btags = plugin['_meta']['hostvars'][host]['tags']
tagkey = key.replace('ec2_tag_', '')
assert tagkey in btags, '%s tag not in b file host tags' % tagkey
assert value == btags[tagkey], '%s != %s' % (value, btags[tagkey])
else:
print('assert var', key, value, key in plugin['_meta']['hostvars'][host], plugin['_meta']['hostvars'][host].get(key))
assert key in plugin['_meta']['hostvars'][host], "%s not in b's %s hostvars" % (key, host)
assert value == plugin['_meta']['hostvars'][host][key], "%s != %s" % (value, plugin['_meta']['hostvars'][host][key])
def main():
# a should be the source of truth (the script output)
a = sys.argv[1]
# b should be the thing to check (the plugin output)
b = sys.argv[2]
with open(a, 'r') as f:
adata = json.loads(f.read())
with open(b, 'r') as f:
bdata = json.loads(f.read())
print(adata)
print(bdata)
# all hosts should be present obviously
ahosts, bhosts = check_hosts(adata, bdata)
# all groups should be present obviously
agroups, bgroups = check_groups(adata, bdata)
# check host vars can be reconstructed
for ahost in ahosts:
contrib_host_vars = adata['_meta']['hostvars'][ahost]
for key, value in contrib_host_vars.items():
check_host_vars(key, value, bdata, ahost)
if __name__ == "__main__":
main()

View file

@ -1,80 +0,0 @@
#!/usr/bin/env bash
if [[ $(python --version 2>&1) =~ 2\.6 ]]
then
echo "Openshift client is not supported on Python 2.6"
exit 0
fi
set -eux
uname -a
if [[ $(uname -a) =~ FreeBSD\ 12\.0-RELEASE ]]
then
# On FreeBSD 12.0 images, upgrade setuptools to avoid error with multidict
# This is a bug in pip, which happens because the old setuptools from outside
# the venv leaks into the venv (https://github.com/pypa/pip/issues/6264).
# Since it is not fixed in latest pip (which is available in the venv), we
# need to upgrade setuptools outside the venv.
pip3 install --upgrade setuptools
fi
source virtualenv.sh
python --version
pip --version
pip show setuptools
pip install openshift -c constraints.txt
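# server.py (in this directory) serves canned KubeVirt API responses on localhost:12345,
# which is where the fake kubeconfig written below points.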
./server.py &
cleanup() {
kill -9 "$(jobs -p)"
}
trap cleanup INT TERM EXIT
# Fake auth file
mkdir -p ~/.kube/
cat <<EOF > ~/.kube/config
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: http://localhost:12345
name: development
contexts:
- context:
cluster: development
user: developer
name: dev-frontend
current-context: dev-frontend
kind: Config
preferences: {}
users:
- name: developer
user:
token: ZDNg7LzSlp8a0u0fht_tRnPMTOjxqgJGCyi_iy0ecUw
EOF
#################################################
# RUN THE PLUGIN
#################################################
# run the plugin second
export ANSIBLE_INVENTORY_ENABLED=community.general.kubevirt
export ANSIBLE_INVENTORY=test.kubevirt.yml
cat << EOF > "$OUTPUT_DIR/test.kubevirt.yml"
plugin: community.general.kubevirt
connections:
- namespaces:
- default
EOF
ANSIBLE_JINJA2_NATIVE=1 ansible-inventory -vvvv -i "$OUTPUT_DIR/test.kubevirt.yml" --list --output="$OUTPUT_DIR/plugin.out"
#################################################
# DIFF THE RESULTS
#################################################
./inventory_diff.py "$(pwd)/test.out" "$OUTPUT_DIR/plugin.out"

View file

@ -1,164 +0,0 @@
#!/usr/bin/env python
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
try:
from http.server import HTTPServer
from http.server import SimpleHTTPRequestHandler
except ImportError:
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from threading import Thread
try:
from urllib.parse import urlparse
except ImportError:
from urlparse import urlparse
class TestHandler(SimpleHTTPRequestHandler):
# Path handlers:
handlers = {}
def log_message(self, format, *args):
"""
Empty method, so we don't mix output of HTTP server with tests
"""
pass
def do_GET(self):
params = urlparse(self.path)
if params.path in self.handlers:
self.handlers[params.path](self)
else:
SimpleHTTPRequestHandler.do_GET(self)
def do_POST(self):
params = urlparse(self.path)
if params.path in self.handlers:
self.handlers[params.path](self)
else:
SimpleHTTPRequestHandler.do_POST(self)
class TestServer(object):
# The host and port and path used by the embedded tests web server:
PORT = None
# The embedded web server:
_httpd = None
# Thread for http server:
_thread = None
def set_json_response(self, path, code, body):
def _handle_request(handler):
handler.send_response(code)
handler.send_header('Content-Type', 'application/json')
handler.end_headers()
data = json.dumps(body, ensure_ascii=False).encode('utf-8')
handler.wfile.write(data)
TestHandler.handlers[path] = _handle_request
def start_server(self, host='localhost'):
self._httpd = HTTPServer((host, 12345), TestHandler)
self._thread = Thread(target=self._httpd.serve_forever)
self._thread.start()
def stop_server(self):
self._httpd.shutdown()
self._thread.join()
if __name__ == '__main__':
print(os.getpid())
server = TestServer()
server.start_server()
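# The canned responses below mimic the minimal API discovery surface (core API, API group
# list, and the kubevirt.io/v1alpha3 resources) that the dynamic client walks before the
# inventory plugin can list VirtualMachineInstances.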
server.set_json_response(path="/version", code=200, body={})
server.set_json_response(path="/api", code=200, body={
"kind": "APIVersions", "versions": ["v1"], "serverAddressByClientCIDRs": [{"clientCIDR": "0.0.0.0/0", "serverAddress": "localhost:12345"}]
})
server.set_json_response(path="/api/v1", code=200, body={'resources': {}})
server.set_json_response(path="/apis", code=200, body={
"kind": "APIGroupList", "apiVersion": "v1",
"groups": [{
"name": "kubevirt.io", "versions": [{"groupVersion": "kubevirt.io/v1alpha3", "version": "v1alpha3"}],
"preferredVersion": {"groupVersion": "kubevirt.io/v1alpha3", "version": "v1alpha3"}
}]
})
server.set_json_response(
path="/apis/kubevirt.io/v1alpha3",
code=200,
body={
"kind": "APIResourceList", "apiVersion": "v1", "groupVersion": "kubevirt.io/v1alpha3",
"resources": [{
"name": "virtualmachineinstances", "singularName": "virtualmachineinstance",
"namespaced": True, "kind": "VirtualMachineInstance",
"verbs": ["delete", "deletecollection", "get", "list", "patch", "create", "update", "watch"],
"shortNames":["vmi", "vmis"]
}]
}
)
server.set_json_response(
path="/apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances",
code=200,
body={'apiVersion': 'kubevirt.io/v1alpha3',
'items': [{'apiVersion': 'kubevirt.io/v1alpha3',
'kind': 'VirtualMachineInstance',
'metadata': {'annotations': {'ansible': '{"data1": "yes", "data2": "no"}'},
'creationTimestamp': '2019-04-05T14:17:02Z',
'generateName': 'myvm',
'generation': 1,
'labels': {'kubevirt.io/nodeName': 'localhost',
'label': 'x',
'vm.cnv.io/name': 'myvm'},
'name': 'myvm',
'namespace': 'default',
'ownerReferences': [{'apiVersion': 'kubevirt.io/v1alpha3',
'blockOwnerDeletion': True,
'controller': True,
'kind': 'VirtualMachine',
'name': 'myvm',
'uid': 'f78ebe62-5666-11e9-a214-0800279ffc6b'}],
'resourceVersion': '1614085',
'selfLink': '/apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/myvm',
'uid': '7ba1b196-57ad-11e9-9e2e-0800279ffc6b'},
'spec': {'domain': {'devices': {'disks': [{'disk': {'bus': 'virtio'},
'name': 'containerdisk'},
{'disk': {'bus': 'virtio'}, 'name': 'ansiblecloudinitdisk'}],
'interfaces': [{'bridge': {}, 'name': 'default'}]},
'firmware': {'uuid': 'cdf77e9e-871b-5acb-a707-80ef3d4b9849'},
'machine': {'type': ''},
'resources': {'requests': {'memory': '64M'}}},
'networks': [{'name': 'default', 'pod': {}}],
'volumes': [{'containerDisk': {'image': 'kubevirt/cirros-container-disk-demo:v0.11.0'},
'name': 'containerdisk'},
{'cloudInitNoCloud': {'userData': '#cloud-config\npassword: password\nchpasswd: { expire: False }'},
'name': 'ansiblecloudinitdisk'}]},
'status': {'conditions': [{'lastProbeTime': None,
'lastTransitionTime': None,
'status': 'True',
'type': 'LiveMigratable'},
{'lastProbeTime': None,
'lastTransitionTime': '2019-04-05T14:17:27Z',
'status': 'True',
'type': 'Ready'}],
'interfaces': [{'ipAddress': '172.17.0.19',
'mac': '02:42:ac:11:00:13',
'name': 'default'}],
'migrationMethod': 'BlockMigration',
'nodeName': 'localhost',
'phase': 'Running'}}],
'kind': 'VirtualMachineInstanceList',
'metadata': {'continue': '',
'resourceVersion': '1614862',
'selfLink': '/apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances'}}
)

View file

@ -1,61 +0,0 @@
{
"_meta": {
"hostvars": {
"default-myvm-7ba1b196-57ad-11e9-9e2e-0800279ffc6b": {
"annotations": {
"ansible": "{\"data1\": \"yes\", \"data2\": \"no\"}"
},
"ansible_host": "172.17.0.19",
"data1": "yes",
"data2": "no",
"labels": {
"kubevirt.io/nodeName": "localhost",
"label": "x",
"vm.cnv.io/name": "myvm"
},
"object_type": "vm",
"resource_version": "1614085",
"uid": "7ba1b196-57ad-11e9-9e2e-0800279ffc6b"
}
}
},
"all": {
"children": [
"label_kubevirt_io_nodeName_localhost",
"label_label_x",
"label_vm_cnv_io_name_myvm",
"localhost_12345",
"ungrouped"
]
},
"label_kubevirt_io_nodeName_localhost": {
"hosts": [
"default-myvm-7ba1b196-57ad-11e9-9e2e-0800279ffc6b"
]
},
"label_label_x": {
"hosts": [
"default-myvm-7ba1b196-57ad-11e9-9e2e-0800279ffc6b"
]
},
"label_vm_cnv_io_name_myvm": {
"hosts": [
"default-myvm-7ba1b196-57ad-11e9-9e2e-0800279ffc6b"
]
},
"localhost_12345": {
"children": [
"namespace_default"
]
},
"namespace_default": {
"children": [
"namespace_default_vms"
]
},
"namespace_default_vms": {
"hosts": [
"default-myvm-7ba1b196-57ad-11e9-9e2e-0800279ffc6b"
]
}
}

View file

@ -13,26 +13,6 @@ plugins/modules/cloud/centurylink/clc_publicip.py validate-modules:parameter-lis
plugins/modules/cloud/centurylink/clc_server.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/heroku/heroku_collaborator.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-missing-type
plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-required-mismatch
plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:return-syntax-error
plugins/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:doc-required-mismatch
plugins/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/kubevirt/kubevirt_template.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_template.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/linode/linode.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/linode/linode.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/linode/linode.py validate-modules:undocumented-parameter

View file

@ -13,26 +13,6 @@ plugins/modules/cloud/centurylink/clc_publicip.py validate-modules:parameter-lis
plugins/modules/cloud/centurylink/clc_server.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/heroku/heroku_collaborator.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-missing-type
plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-required-mismatch
plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:return-syntax-error
plugins/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:doc-required-mismatch
plugins/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/kubevirt/kubevirt_template.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_template.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:mutually_exclusive-unknown
plugins/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/linode/linode.py validate-modules:parameter-list-no-elements
plugins/modules/cloud/linode/linode.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/linode/linode.py validate-modules:undocumented-parameter

View file

@ -4,11 +4,6 @@ plugins/module_utils/compat/ipaddress.py no-assert
plugins/module_utils/compat/ipaddress.py no-unicode-literals
plugins/module_utils/_mount.py future-import-boilerplate
plugins/module_utils/_mount.py metaclass-boilerplate
plugins/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-missing-type
plugins/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/linode/linode.py validate-modules:parameter-type-not-in-doc
plugins/modules/cloud/linode/linode.py validate-modules:undocumented-parameter
plugins/modules/cloud/lxc/lxc_container.py pylint:blacklisted-name

View file

@ -1,56 +0,0 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import pytest
from ansible_collections.community.general.plugins.module_utils import kubevirt as mymodule
def test_simple_merge_dicts():
dict1 = {'labels': {'label1': 'value'}}
dict2 = {'labels': {'label2': 'value'}}
dict3 = json.dumps({'labels': {'label1': 'value', 'label2': 'value'}}, sort_keys=True)
assert dict3 == json.dumps(dict(mymodule.KubeVirtRawModule.merge_dicts(dict1, dict2)), sort_keys=True)
def test_simple_multi_merge_dicts():
dict1 = {'labels': {'label1': 'value', 'label3': 'value'}}
dict2 = {'labels': {'label2': 'value'}}
dict3 = json.dumps({'labels': {'label1': 'value', 'label2': 'value', 'label3': 'value'}}, sort_keys=True)
assert dict3 == json.dumps(dict(mymodule.KubeVirtRawModule.merge_dicts(dict1, dict2)), sort_keys=True)
def test_double_nested_merge_dicts():
dict1 = {'metadata': {'labels': {'label1': 'value', 'label3': 'value'}}}
dict2 = {'metadata': {'labels': {'label2': 'value'}}}
dict3 = json.dumps({'metadata': {'labels': {'label1': 'value', 'label2': 'value', 'label3': 'value'}}}, sort_keys=True)
assert dict3 == json.dumps(dict(mymodule.KubeVirtRawModule.merge_dicts(dict1, dict2)), sort_keys=True)
@pytest.mark.parametrize("lval, operations, rval, result", [
('v1', ['<', '<='], 'v2', True),
('v1', ['>', '>=', '=='], 'v2', False),
('v1', ['>'], 'v1alpha1', True),
('v1', ['==', '<', '<='], 'v1alpha1', False),
('v1beta5', ['==', '<=', '>='], 'v1beta5', True),
('v1beta5', ['<', '>', '!='], 'v1beta5', False),
])
def test_kubeapiversion_comparisons(lval, operations, rval, result):
KubeAPIVersion = mymodule.KubeAPIVersion
for op in operations:
test = '(KubeAPIVersion("{0}") {1} KubeAPIVersion("{2}")) == {3}'.format(lval, op, rval, result)
assert eval(test)
@pytest.mark.parametrize("ver", ('nope', 'v1delta7', '1.5', 'v1beta', 'v'))
def test_kubeapiversion_unsupported_versions(ver):
with pytest.raises(ValueError):
mymodule.KubeAPIVersion(ver)

View file

@ -1,74 +0,0 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.tests.unit.compat.mock import MagicMock
from ansible_collections.community.kubernetes.plugins.module_utils.common import K8sAnsibleMixin
from ansible_collections.community.kubernetes.plugins.module_utils.raw import KubernetesRawModule
from ansible_collections.community.general.plugins.module_utils.kubevirt import KubeVirtRawModule
import openshift.dynamic
RESOURCE_DEFAULT_ARGS = {'api_version': 'v1alpha3', 'group': 'kubevirt.io',
'prefix': 'apis', 'namespaced': True}
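# These defaults mirror the kubevirt.io/v1alpha3 API group/version used elsewhere in these tests.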
class AnsibleExitJson(Exception):
"""Exception class to be raised by module.exit_json and caught
by the test case"""
def __init__(self, **kwargs):
for k in kwargs:
setattr(self, k, kwargs[k])
def __getitem__(self, attr):
return getattr(self, attr)
class AnsibleFailJson(Exception):
"""Exception class to be raised by module.fail_json and caught
by the test case"""
def __init__(self, **kwargs):
for k in kwargs:
setattr(self, k, kwargs[k])
def __getitem__(self, attr):
return getattr(self, attr)
def exit_json(*args, **kwargs):
kwargs['success'] = True
if 'changed' not in kwargs:
kwargs['changed'] = False
raise AnsibleExitJson(**kwargs)
def fail_json(*args, **kwargs):
kwargs['success'] = False
raise AnsibleFailJson(**kwargs)
@pytest.fixture()
def base_fixture(monkeypatch):
monkeypatch.setattr(
AnsibleModule, "exit_json", exit_json)
monkeypatch.setattr(
AnsibleModule, "fail_json", fail_json)
# Create mock methods in Resource directly, otherwise dyn client
# tries binding those to corresponding methods in DynamicClient
# (with partial()), which is more problematic to intercept
openshift.dynamic.Resource.get = MagicMock()
openshift.dynamic.Resource.create = MagicMock()
openshift.dynamic.Resource.delete = MagicMock()
openshift.dynamic.Resource.patch = MagicMock()
openshift.dynamic.Resource.search = MagicMock()
openshift.dynamic.Resource.watch = MagicMock()
# Globally mock some methods, since all tests will use this
KubernetesRawModule.patch_resource = MagicMock()
KubernetesRawModule.patch_resource.return_value = ({}, None)
K8sAnsibleMixin.get_api_client = MagicMock()
K8sAnsibleMixin.get_api_client.return_value = None
K8sAnsibleMixin.find_resource = MagicMock()
KubeVirtRawModule.find_supported_resource = MagicMock()
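A typical consumer of this fixture (patterned on the test modules below; the module class and arguments are purely illustrative) drives the module once and asserts on the intercepted exit:

@pytest.mark.usefixtures("base_fixture")
def test_example_create():
    # set_module_args and mymodule are imported the same way the test files below import them.
    set_module_args(dict(state='present', name='testvm', namespace='vms', wait=False))
    with pytest.raises(AnsibleExitJson) as result:
        mymodule.KubeVirtVM().execute_module()
    assert result.value['changed']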

View file

@@ -1,80 +0,0 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
openshiftdynamic = pytest.importorskip("openshift.dynamic")
from ansible_collections.community.general.tests.unit.plugins.modules.utils import set_module_args
from .kubevirt_fixtures import base_fixture, RESOURCE_DEFAULT_ARGS, AnsibleExitJson
from ansible_collections.community.kubernetes.plugins.module_utils.raw import KubernetesRawModule
from ansible_collections.community.general.plugins.modules.cloud.kubevirt import kubevirt_rs as mymodule
KIND = 'VirtualMachineInstanceReplicaSet'
@pytest.mark.usefixtures("base_fixture")
@pytest.mark.parametrize("_replicas, _changed", ((1, True),
(3, True),
(2, False),
(5, True),))
def test_scale_rs_nowait(_replicas, _changed):
_name = 'test-rs'
# Desired state:
args = dict(name=_name, namespace='vms', replicas=_replicas, wait=False)
set_module_args(args)
# Mock pre-change state:
resource_args = dict(kind=KIND, **RESOURCE_DEFAULT_ARGS)
mymodule.KubeVirtVMIRS.find_supported_resource.return_value = openshiftdynamic.Resource(**resource_args)
res_inst = openshiftdynamic.ResourceInstance('', dict(kind=KIND, metadata={'name': _name}, spec={'replicas': 2}))
openshiftdynamic.Resource.get.return_value = res_inst
openshiftdynamic.Resource.search.return_value = [res_inst]
# Final state, after patching the object
KubernetesRawModule.patch_resource.return_value = dict(kind=KIND, metadata={'name': _name},
spec={'replicas': _replicas}), None
# Run code:
with pytest.raises(AnsibleExitJson) as result:
mymodule.KubeVirtVMIRS().execute_module()
# Verify result:
assert result.value['changed'] == _changed
@pytest.mark.usefixtures("base_fixture")
@pytest.mark.parametrize("_replicas, _success", ((1, False),
(2, False),
(5, True),))
def test_scale_rs_wait(_replicas, _success):
_name = 'test-rs'
# Desired state:
args = dict(name=_name, namespace='vms', replicas=5, wait=True)
set_module_args(args)
# Mock pre-change state:
resource_args = dict(kind=KIND, **RESOURCE_DEFAULT_ARGS)
mymodule.KubeVirtVMIRS.find_supported_resource.return_value = openshiftdynamic.Resource(**resource_args)
res_inst = openshiftdynamic.ResourceInstance('', dict(kind=KIND, metadata={'name': _name}, spec={'replicas': 2}))
openshiftdynamic.Resource.get.return_value = res_inst
openshiftdynamic.Resource.search.return_value = [res_inst]
# ~Final state, after patching the object (`replicas` match desired state)
KubernetesRawModule.patch_resource.return_value = dict(kind=KIND, name=_name, metadata={'name': _name},
spec={'replicas': 5}), None
# Final final state, as returned by resource.watch()
final_obj = dict(metadata=dict(name=_name), status=dict(readyReplicas=_replicas), **resource_args)
event = openshiftdynamic.ResourceInstance(None, final_obj)
openshiftdynamic.Resource.watch.return_value = [dict(object=event)]
# Run code:
with pytest.raises(Exception) as result:
mymodule.KubeVirtVMIRS().execute_module()
# Verify result:
assert result.value['success'] == _success
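Both removed test modules lean on the shared set_module_args helper from tests/unit/plugins/modules/utils. Its usual shape, reproduced here from memory as an assumption rather than copied from this commit, is to feed the parameters to AnsibleModule the same way Ansible itself would:

import json

from ansible.module_utils import basic
from ansible.module_utils._text import to_bytes

def set_module_args(args):
    # Pretend the module was invoked by Ansible with exactly these parameters.
    args = json.dumps({'ANSIBLE_MODULE_ARGS': args})
    basic._ANSIBLE_ARGS = to_bytes(args)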

View file

@@ -1,115 +0,0 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
openshiftdynamic = pytest.importorskip("openshift.dynamic")
from ansible_collections.community.general.tests.unit.plugins.modules.utils import set_module_args
from .kubevirt_fixtures import base_fixture, RESOURCE_DEFAULT_ARGS, AnsibleExitJson
from ansible_collections.community.general.plugins.module_utils.kubevirt import KubeVirtRawModule
from ansible_collections.community.general.plugins.modules.cloud.kubevirt import kubevirt_vm as mymodule
KIND = 'VirtulMachine'
@pytest.mark.usefixtures("base_fixture")
def test_create_vm_with_multus_nowait():
# Desired state:
args = dict(
state='present', name='testvm',
namespace='vms',
interfaces=[
{'bridge': {}, 'name': 'default', 'network': {'pod': {}}},
{'bridge': {}, 'name': 'mynet', 'network': {'multus': {'networkName': 'mynet'}}},
],
wait=False,
)
set_module_args(args)
# State as "returned" by the "k8s cluster":
resource_args = dict(kind=KIND, **RESOURCE_DEFAULT_ARGS)
KubeVirtRawModule.find_supported_resource.return_value = openshiftdynamic.Resource(**resource_args)
openshiftdynamic.Resource.get.return_value = None # Object doesn't exist in the cluster
# Run code:
with pytest.raises(AnsibleExitJson) as result:
mymodule.KubeVirtVM().execute_module()
# Verify result:
assert result.value['changed']
assert result.value['method'] == 'create'
@pytest.mark.usefixtures("base_fixture")
@pytest.mark.parametrize("_wait", (False, True))
def test_vm_is_absent(_wait):
# Desired state:
args = dict(
state='absent', name='testvmi',
namespace='vms',
wait=_wait,
)
set_module_args(args)
# State as "returned" by the "k8s cluster":
resource_args = dict(kind=KIND, **RESOURCE_DEFAULT_ARGS)
KubeVirtRawModule.find_supported_resource.return_value = openshiftdynamic.Resource(**resource_args)
openshiftdynamic.Resource.get.return_value = None # Object doesn't exist in the cluster
# Run code:
with pytest.raises(AnsibleExitJson) as result:
mymodule.KubeVirtVM().execute_module()
# Verify result:
assert not result.value['kubevirt_vm']
assert result.value['method'] == 'delete'
# Note: nothing actually gets deleted, as we mock that there is no object present in the cluster,
# so if the method changes to something other than 'delete' at some point, that's fine
@pytest.mark.usefixtures("base_fixture")
def test_vmpreset_create():
KIND = 'VirtulMachineInstancePreset'
# Desired state:
args = dict(state='present', name='testvmipreset', namespace='vms', memory='1024Mi', wait=False)
set_module_args(args)
# State as "returned" by the "k8s cluster":
resource_args = dict(kind=KIND, **RESOURCE_DEFAULT_ARGS)
KubeVirtRawModule.find_supported_resource.return_value = openshiftdynamic.Resource(**resource_args)
openshiftdynamic.Resource.get.return_value = None # Object doesn't exist in the cluster
# Run code:
with pytest.raises(AnsibleExitJson) as result:
mymodule.KubeVirtVM().execute_module()
# Verify result:
assert result.value['changed']
assert result.value['method'] == 'create'
@pytest.mark.usefixtures("base_fixture")
def test_vmpreset_is_absent():
KIND = 'VirtulMachineInstancePreset'
# Desired state:
args = dict(state='absent', name='testvmipreset', namespace='vms')
set_module_args(args)
# State as "returned" by the "k8s cluster":
resource_args = dict(kind=KIND, **RESOURCE_DEFAULT_ARGS)
KubeVirtRawModule.find_supported_resource.return_value = openshiftdynamic.Resource(**resource_args)
openshiftdynamic.Resource.get.return_value = None # Object doesn't exist in the cluster
# Run code:
with pytest.raises(AnsibleExitJson) as result:
mymodule.KubeVirtVM().execute_module()
# Verify result:
assert not result.value['kubevirt_vm']
assert result.value['method'] == 'delete'
# Note: nothing actually gets deleted, as we mock that there is no object present in the cluster,
# so if the method changes to something other than 'delete' at some point, that's fine
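All of the VM and preset cases above share the same "object is absent from the cluster" arrangement; condensed into a hypothetical helper (the names follow the code above, the helper itself is not part of this commit) it reads:

def mock_absent_object(kind):
    # The cluster knows the resource kind but holds no matching object,
    # so 'state: present' creates and 'state: absent' is a no-op delete.
    resource_args = dict(kind=kind, **RESOURCE_DEFAULT_ARGS)
    KubeVirtRawModule.find_supported_resource.return_value = openshiftdynamic.Resource(**resource_args)
    openshiftdynamic.Resource.get.return_value = None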

View file

@@ -15,9 +15,6 @@ linode_api4 ; python_version > '2.6' # APIv4
python-gitlab < 2.3.0 # version 2.3.0 makes gitlab_runner tests fail
httmock
# requirement for kubevirt modules
openshift ; python_version >= '2.7'
# requirement for maven_artifact module
lxml
semantic_version