* Module DOCUMENTATION should match argspec
Large update of many modules so that the DOCUMENTATION option names and
aliases match those defined in the argspec.
Issues identified by https://github.com/ansible/ansible/pull/34809
In addition to many typos and missing aliases, the following notable
changes were made:
* Create `module_docs_fragments/url.py` for `url_argument_spec`
* `dellos*_command` shouldn't have ever had `waitfor` (was incorrectly copied)
* `ce_aaa_server_host.py` `s/raduis_server_type/radius_server_type/g`
* `Junos_lldp` enable should be part of `state`.
Fixes #34917
* Remove spaces from within the interface name
* Convert the interface name to lower case, since interface names
are case insensitive with respect to configuration on the remote device.
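A minimal sketch of the normalization described above (the helper name is illustrative, not the module's actual code):

```python
def normalize_interface_name(name):
    """Illustrative helper: strip embedded spaces and lower-case the name,
    since the remote device treats interface names case-insensitively."""
    return name.replace(' ', '').lower()

# e.g. 'GigabitEthernet 0/1' -> 'gigabitethernet0/1'
```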
* Add VnicProfileMapping to register VM
Add support for vNIC profile mappings in VM registration.
* Add VnicProfileMapping to register template
Add support for vNIC profile mappings in template registration.
* Add reassign bad macs to register VM
Add support for reassigning bad MACs during VM registration.
* Add additional mappings params for VM registration
As part of the effort to support DR with oVirt, the "Register" operation
is gaining a new mapping parameter that describes the configuration of the
registration.
The idea of supporting site-to-site DR in oVirt is to have two active
setups using storage replication between the primary setup and the
secondary setup.
Both setups will have active DCs, clusters, and hosts, although those
will not be identical.
The user can define a mapping which will be used to recover the setup.
Each mapping can be used to map any VM attribute stored in the OVF
to its correlated entity.
For example, there could be a primary setup with a VM configured on cluster A,
and an active secondary setup which only has cluster B.
Cluster B is compatible for that VM, so in a DR scenario the storage domain
can theoretically be imported to the secondary setup and the user can
register the VM to cluster B.
In that case, we can automate the recovery process by defining a cluster mapping:
once the entity is registered, its OVF will indicate that it belongs to
cluster A, but the mapping sent with the request will indicate that cluster B
should be valid for everything configured on cluster A.
The engine should do the switch and register the VM to cluster B in the secondary site.
Cluster mapping is just one example. The following mappings were introduced:
LUN mapping
Role mapping
Permissions mapping
Affinity group mapping
Affinity label mapping
Each mapping will be used for its specific OVF data once the register operation
takes place in the engine.
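As a rough illustration of the idea, the cluster mapping sent with a register request could look like the structure below; the parameter and key names are assumptions for the sake of the example, not necessarily the module's exact argspec.

```python
# Illustrative shape of a register call with a cluster mapping.
register_params = {
    'state': 'registered',
    'storage_domain': 'imported_data_domain',
    'name': 'vm1',
    # The OVF says the VM belongs to "cluster_A"; the mapping tells the engine
    # to place it on "cluster_B" in the secondary site instead.
    'cluster_mappings': [
        {'source_name': 'cluster_A', 'dest_name': 'cluster_B'},
    ],
}
```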
* Add additional mappings params for Template registration
As part of the effort to support DR with oVirt, the "Register" operation
is gaining a new mapping parameter that describes the configuration of the
registration.
The idea of supporting site-to-site DR in oVirt is to have two active
setups using storage replication between the primary setup and the
secondary setup.
Both setups will have active DCs, clusters, and hosts, although those
will not be identical.
The user can define a mapping which will be used to recover the setup.
Each mapping can be used to map any Template attribute stored in the OVF
to its correlated entity.
For example, there could be a primary setup with a Template configured on cluster A,
and an active secondary setup which only has cluster B.
Cluster B is compatible for that Template, so in a DR scenario the storage domain
can theoretically be imported to the secondary setup and the user can
register the Template to cluster B.
In that case, we can automate the recovery process by defining a cluster mapping:
once the entity is registered, its OVF will indicate that it belongs to
cluster A, but the mapping sent with the request will indicate that cluster B
should be valid for everything configured on cluster A.
The engine should do the switch and register the Template to cluster B in the
secondary site.
Cluster mapping is just one example. The following mappings were introduced:
Role mapping
Permissions mapping
Each mapping will be used for its specific OVF data once the register operation
takes place in the engine.
* Add support for updating the OVF store
Add support for the task of updating the OVF store in a storage domain.
* allow shells to have per-host options, e.g. remote_tmp
added the language setting to the shell plugin
removed the module lang setting from general as plugins have it now
use get to avoid bad powershell plugin
more resilient tmp discovery, fall back to `pwd`
add shell to docs
fixed options for when fragments are only options
added shell set ops in task_executor and fixed option fragments
normalize tmp dir usage
- pass tmpdir/tmp/temp options as an env var to commands, making it the default for tempfile (see the sketch after this list)
- adjusted ansiballz tmpdir
- default local tempfile usage to the configured local tmp
- set env temp in action
add options to powershell
shift temporary to internal envvar/params
ensure tempdir is set if we pass var
ensure basic and url use expected tempdir
ensure localhost uses local tmp
give /var/tmp priority, fewer permission issues
more consistent tempfile mgmt for ansiballz
made async_dir configurable
better action handling, allow for finally rm tmp
fixed tmp issue and no more tempdir in ballz
allow world readable and admin users settings to be set via host vars
always set shell tempdir
added comment to discourage use of exception/flow control
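A minimal sketch of why exporting the temp directory as an environment variable makes it the default for tempfile in spawned commands; the path used here is only an example value for the shell option.

```python
import os
import subprocess

remote_tmp = os.path.expanduser('~/.ansible/tmp')   # example per-host remote_tmp value
os.makedirs(remote_tmp, exist_ok=True)

# tempfile consults TMPDIR (then TEMP/TMP) when choosing its default directory,
# so a child process started with TMPDIR set creates its temporary files under
# the configured location without any extra plumbing in the command itself.
env = dict(os.environ, TMPDIR=remote_tmp)
subprocess.run(
    ['python3', '-c', 'import tempfile; print(tempfile.gettempdir())'],
    env=env,
    check=True,
)  # prints the remote_tmp path
```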
* Mostly revert expand_user as it's not quite working.
This was an additional feature anyhow.
Kept the use of pwd as a fallback but moved it to a second ssh
connection. This is not optimal but getting that to work in a single
ssh connection was part of the problem holding this up.
(cherry picked from commit 395b714120522f15e4c90a346f5e8e8d79213aca)
* fixed script and other action plugins
ensure tmpdir deletion
allow for connections that don't support new options (legacy, 3rd party)
fixed tests
When using the -c option, like "ansible-config -c ~/.ansible.cfg view"
with python 3, it fails with this error message:
ERROR! Unsupported configuration file extension for b'/home/misc/.ansible.cfg': .cfg
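A minimal sketch of the failure mode under Python 3; the check shown here is illustrative, not Ansible's exact config-loading code. A path handled as bytes keeps a bytes extension, so a comparison against the text literal '.cfg' never matches.

```python
import os

path = b'/home/misc/.ansible.cfg'
ext = os.path.splitext(path)[1]        # b'.cfg' (bytes, because the input was bytes)
print(ext in ('.ini', '.cfg'))         # False -> "Unsupported configuration file extension"

# Decoding the path to text first restores the expected behaviour.
text_path = path.decode('utf-8') if isinstance(path, bytes) else path
ext = os.path.splitext(text_path)[1]   # '.cfg'
print(ext in ('.ini', '.cfg'))         # True
```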
* port elb_classic_facts to boto3
update module to use AnsibleAWSModule
* Add RETURN docs for elb_classic_lb_facts
* Remove superfluous exception handling around connection
Fix exit_json call and RETURN docs
This fixes fact gathering for VMware guest machines running
older Linux kernel versions. These older kernels do not support the /sys
filesystem, which is used to gather virtualization-related facts.
'dmidecode' is the safest option for finding out virtualization-related facts.
Fixes: #21573
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
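A rough sketch of dmidecode-based detection; the '-s system-product-name' flag is a real dmidecode option, but the mapping shown is only a partial, illustrative example (dmidecode typically needs root).

```python
import subprocess

def virtualization_from_dmidecode():
    """Ask dmidecode for the system product name and map a few known strings
    to a virtualization type; works even without a usable /sys filesystem."""
    out = subprocess.run(
        ['dmidecode', '-s', 'system-product-name'],
        capture_output=True, text=True, check=False,
    ).stdout.strip()
    if 'VMware' in out:
        return 'VMware', 'guest'
    if out in ('VirtualBox', 'KVM', 'Bochs'):
        return out, 'guest'
    return 'NA', 'NA'
```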
* Cache tasks as they are queued instead of en masse
This also moves the task caching from the PlayIterator to the
StrategyBase class, where it makes more sense (and avoids having to
change the strategy class methods, which would have led to an API
change).
Fixes #31673
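A minimal sketch of the idea, caching each task only when it is queued rather than walking the whole play up front; the attribute and cache names are illustrative, not the strategy's exact internals.

```python
class StrategyBaseSketch:
    def __init__(self):
        # cache keyed by (host name, task uuid), filled lazily as work is queued
        self._queued_task_cache = {}

    def _queue_task(self, host, task, task_vars, play_context):
        # record the task at the moment it is handed to a worker
        self._queued_task_cache[(host.name, task._uuid)] = {
            'host': host,
            'task': task,
            'task_vars': task_vars,
            'play_context': play_context,
        }
        # ... hand the task off to a worker as before ...
```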
* Cleaning up unit tests due to 502ca780
* Add validation for the next to last line of a module
* Fix last error code
* Reduce to a single conditional
* Fix conditionals
* Move the final warnings statement to main() in mysql_replication
oVirt modules support passing authentication details for the connection
via environment variables, but ovirt_auth doesn't. This patch adds
support for it.
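A minimal sketch of the fallback, reading connection details from the environment when they are not passed as module parameters; the OVIRT_* variable names are the ones commonly used by the oVirt SDK tooling and should be treated as assumptions here.

```python
import os

def resolve_ovirt_auth(params):
    """Fill in missing connection parameters from environment variables."""
    env_fallbacks = {
        'url': 'OVIRT_URL',
        'username': 'OVIRT_USERNAME',
        'password': 'OVIRT_PASSWORD',
        'ca_file': 'OVIRT_CAFILE',
    }
    resolved = dict(params)
    for key, env_var in env_fallbacks.items():
        if not resolved.get(key):
            resolved[key] = os.environ.get(env_var)
    return resolved
```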
Change cb1b705218 introduced the newline parameter on the network_cli
plugin, but it was never introduced on cliconf.
This causes ios_user to break, since the newline value is never plumbed through
to network_cli.
* acl test: don't use restricted key
Fix this warning: [WARNING]: Removed restricted key from module data: ansible_user = ansible_user
* acl test: allow it to execute twice in a row
* updates to azure_rm_sqlserver_facts
Currently the ManageIQ API does not support azure_tenant_id but
this is in the process of being fixed. This commit changes
the recently-committed parameter to allow for the upcoming change.
Provides a dynamic inventory plugin for the Scaleway cloud provider with
the following features:
- Configurable scaleway.ini file
- Cache API responses
- Choose public or private IPs
- Create groups per Scaleway 'tags'
- Create groups per Scaleway regions
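For context, a dynamic inventory source like this ultimately emits the standard JSON structure Ansible expects from a --list call. The host names, group names, and IPs below are invented for the example; only the overall layout (groups plus _meta.hostvars) is the standard format.

```python
import json

# Hypothetical --list output: one group per Scaleway tag and per region,
# plus per-host variables under _meta.
inventory = {
    'web': {'hosts': ['scw-instance-1']},     # group built from a tag
    'ams1': {'hosts': ['scw-instance-1']},    # group built from a region
    '_meta': {
        'hostvars': {
            'scw-instance-1': {'ansible_host': '10.1.2.3'},  # private or public IP
        },
    },
}
print(json.dumps(inventory, indent=2))
```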
* Fix persistent command timeout handling
We were using the play context timeout in ansible-connection to set
the persistent command timeout handler. Thus, we were ignoring
the persistent_command_timeout setting.
Moreover, even after changing that in ansible-connection, we were again
overriding it in cliconf send_command, since a single process can only
have one alarm set at a time and the cliconf send_command alarm setup is
executed after the ansible-connection alarm setup.
* Remove alarm setting on cliconf send_command
The alarm is already set by ansible-connection before send_command executes.
Setting an alarm again overrides/disables the previous one, since a single
process can only have one alarm set.
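A small Python illustration of the constraint described above (POSIX only): arming a second SIGALRM timer replaces the first one, so whichever component sets the alarm last wins.

```python
import signal

def handler(signum, frame):
    raise Exception('persistent command timed out')

signal.signal(signal.SIGALRM, handler)

signal.alarm(30)             # e.g. the timeout armed by ansible-connection
remaining = signal.alarm(5)  # a later alarm (as cliconf send_command did) replaces it
print(remaining)             # ~30: the first alarm is gone, only the 5s one is pending
signal.alarm(0)              # cancel the alarm for the sake of the example
```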
* Move the setting of persistent command timeout to network_cli
We also use ansible-connection for connection: local, so if a
user provides a timeout via the provider, it would be ignored if we
set the value in ansible-connection.
Moving that logic to the network_cli plugin constructor makes both
connection types work.
* Remove debug statements
* Set the persistent command timeout on task_executor
We can't set the timeout in ansible-connection or network_cli,
otherwise tasks using a provider timeout won't work.
This patch is primarily a refactor to make the validate-modules arg-spec
no longer generate a traceback. It additionally removes deprecated
code in the virtual server module.
The main change is removing the traceback-generating code; other
small fixes were made alongside it.
* Removed re-def of cleanup_tokens.
* Changed parameter args to be keywords.
* Changed imports to include new module_util locations.
* Imports also include developing (sideband) module_util locations.
* Changed to using F5Client and plain AnsibleModule to prevent tracebacks caused by missing libraries.
* Removed init and update methods from most Parameter classes (optimization) as they are now included in module_utils.
* Changed module and module param references to take into account the new self.module arg.
* Minor bug fixes made during this refactor.
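A minimal sketch of the pattern referred to above: a plain AnsibleModule with an import guard, so a missing client library produces a clean failure instead of a traceback. The f5-sdk import shown is only representative; treat the specifics as assumptions rather than the module's actual code.

```python
from ansible.module_utils.basic import AnsibleModule

try:
    from f5.bigip import ManagementRoot  # representative f5-sdk import
    HAS_F5SDK = True
except ImportError:
    HAS_F5SDK = False

def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
        ),
        supports_check_mode=True,
    )
    if not HAS_F5SDK:
        # Fail cleanly instead of letting the ImportError traceback escape.
        module.fail_json(msg='The f5-sdk Python library is required for this module')
    module.exit_json(changed=False)

if __name__ == '__main__':
    main()
```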