In cases where a CFN stack could not complete (due to lack of
permissions or similar) but also failed to roll back, the gathering of
stack resources would fail because successfully deleted items in the
rollback would no longer have a `PhysicalResourceId` property.
This PR fixes that by soft-failing when there is no physical ID
associated with a resource.
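As a rough sketch of the soft-fail (boto3-style call; the helper name and output shape are illustrative, not the module's actual code):
```
import boto3

def stack_resources_facts(cfn, stack_name):
    # Resources that were successfully deleted during a failed rollback have
    # no PhysicalResourceId, so skip them instead of raising a KeyError.
    resources = []
    for res in cfn.describe_stack_resources(StackName=stack_name)['StackResources']:
        physical_id = res.get('PhysicalResourceId')
        if physical_id is None:
            continue  # soft-fail: nothing left to report for this resource
        resources.append({
            'logical_resource_id': res['LogicalResourceId'],
            'physical_resource_id': physical_id,
            'status': res['ResourceStatus'],
        })
    return resources

# e.g. stack_resources_facts(boto3.client('cloudformation'), 'broken-stack')
```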
The instance UUID (referred to as PersistenceUUID in the API) is the ID
vCenter uses to identify VMs.
My use case for this is that I configure Zabbix using Ansible, and its
VMware module relies on these UUIDs to identify VMs.
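For context, this is roughly where the value lives on a pyVmomi VM object (assuming an already-established vCenter connection; shown for illustration only):
```
def get_instance_uuid(vm):
    # vm is a pyVmomi vim.VirtualMachine object.
    # config.uuid is the BIOS UUID; config.instanceUuid is the vCenter-assigned
    # instance UUID (PersistenceUUID) that this change exposes in the facts.
    return vm.config.instanceUuid
```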
* Expose internal_network in os_floating_ip
The shade project has finally exposed this argument, so this module now
matches the old quantum_floatingip module's capabilities.
Use the "nat_destination" term instead of "internal_network" to match
shade terminology.
* Add (private|internal)_network aliases to os_floating_ip (see the sketch after this list)
* Fix typo in os_floating_ip
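A minimal sketch of how the aliases could be wired into the module's argument spec (only the relevant option is shown; names follow the commit messages above, not necessarily the final module):
```
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict(
        server=dict(required=True),
        # shade calls this "nat_destination"; the aliases keep the old
        # internal/private network names working.
        nat_destination=dict(required=False, default=None,
                             aliases=['internal_network', 'private_network']),
    ))

if __name__ == '__main__':
    main()
```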
There is a desire to not have this module always result in a change if a
password argument is supplied. The OpenStack API does not return a
password back when we get a user, so we've been assuming that if a
password argument was supplied, we should attempt to change the password
(even if nothing else is changing), and that results in a "changed"
state. Now we only send a password change attempt if the user asks for
one (with the default preserving the historical behavior).
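Sketched with a hypothetical `update_password` option (the actual option name and choices in the module may differ):
```
def should_update_password(module):
    # Only attempt a password change when the user asked for one;
    # 'always' preserves the historical behavior and stays the default.
    if not module.params.get('password'):
        return False
    return module.params.get('update_password', 'always') == 'always'
```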
Fixes #5217
* Add option for a number parameter to generate manually provisioned clusters from a base name (see the sketch after this list)
* Refactor code to handle started and stopped states when number is specified
* Update docs
* Fix documentation error breaking Travis
* Fixes for async gce operations
* Fix documentation
* Change base_name from a parameter to an alias for name, and fix variable renaming
* Fix breaking change on gce.py
* Fix bugs with name parameter
* Fix comments for GitHub build checks
* Add logic to set changed appropriately for cluster provisioning
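The numbered-provisioning idea, roughly (the name padding is an assumption, not the module's exact output):
```
def expand_node_names(base_name, number):
    # Generate per-node names for a manually provisioned cluster.
    return ['%s-%03d' % (base_name, i) for i in range(number)]

# expand_node_names('es-node', 3) -> ['es-node-000', 'es-node-001', 'es-node-002']
```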
Support the new native YAML format in the CloudFormation API. This means
the existing `template_format` parameter is deprecated. This commit also
adds a warning for the deprecated parameter.
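A sketch of that warning, assuming an AnsibleModule instance with warning support (exact wording may differ from the module):
```
def warn_deprecated_template_format(module):
    # CloudFormation now accepts JSON and YAML templates natively, so the
    # parameter no longer changes behavior.
    if module.params.get('template_format') is not None:
        module.warn('template_format is deprecated and ignored; CloudFormation '
                    'accepts JSON or YAML templates directly.')
```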
The docker container module's `exposed_ports` documentation was slightly ambiguous.
Use the official Docker documentation to define what an `exposed port`
is.
Resolves: ansible/ansible-modules-core#5303
Signed-off-by: Daniel Andrei Minca <mandrei17@gmail.com>
- Don't rewrite the result; this is causing 'changed=true' on update
- Move AWSRetry import to top since it's a decorator, and is needed at definition-time (see the sketch after this list)
- removed star-imports, which wasn't possible in Ansible 1.x
- boto doesn't have any of the modern features (most notably, changesets), so this rewrite goes all-in on boto3.
- tags are updateable, at least in boto3. Fix documentation.
- staying with "ansible yaml to json conversion" because I'm trying to keep this scoped properly. The next PR will have AWS-native yaml support.
- documented the output. Tried to leave it backwards-compatible, but the changes to 'events' might break someone's flow. However, the existing data wasn't terribly useful, so I don't expect it to hurt.
- split up the code into functions. This should make unit testing possible.
- added forward-facing code: 'six' for iterating, started using AWSRetry, common tag conversion.
- add todo list
- Pass `exception` parameter to fail_json
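To illustrate the AWSRetry point above: the decorator runs at function definition time, so the import has to sit at the top of the file (the backoff arguments here are assumptions):
```
from ansible.module_utils.ec2 import AWSRetry  # needed before any decorated def

@AWSRetry.backoff(tries=3, delay=2)
def describe_stack(cfn, stack_name):
    # The decorator wraps this function when the module is loaded, retrying
    # transient/throttling errors instead of failing the task immediately.
    return cfn.describe_stacks(StackName=stack_name)['Stacks'][0]
```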
The keys returned by user objects for the default domain and default
project are default_domain_id and default_project_id, respectively.
We need to gather those IDs in case the user passed names, so we
can then compare with the user object on the needs_update helper
function.
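Roughly, the resolution looks like this (shade-style calls; the helper name and return shape are illustrative):
```
def resolved_default_ids(cloud, params):
    # Resolve names to IDs so they can be compared with the user object's
    # default_domain_id / default_project_id keys in needs_update().
    domain_id = project_id = None
    if params.get('domain'):
        domain_id = cloud.get_domain(name_or_id=params['domain'])['id']
    if params.get('default_project'):
        project_id = cloud.get_project(params['default_project'])['id']
    return {'default_domain_id': domain_id, 'default_project_id': project_id}
```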
dopy 0.3.7 makes use of six but doesn't list it as a requirement. This
means that people installing with pip won't get six installed, leading
to errors. Upstream released dopy-0.3.7a to address that but pip thinks
that is an alpha release. pip does not install alpha releases by
default so users aren't helped by that.
This change makes ansible emit a good error message in this case.
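The error-message idea, sketched (the exact message and import layout in the module may differ):
```
try:
    import six
    HAS_SIX = True
except ImportError:
    HAS_SIX = False

try:
    from dopy.manager import DoManager
    HAS_DOPY = True
except ImportError:
    HAS_DOPY = False

def check_requirements(module):
    # Check six first: when it is missing, importing dopy fails too, and a
    # plain "dopy is required" message would point users at the wrong package.
    if not HAS_SIX:
        module.fail_json(msg="dopy requires the 'six' library but it is not "
                             "installed; install it with 'pip install six' "
                             "(dopy 0.3.7 does not list it as a requirement).")
    if not HAS_DOPY:
        module.fail_json(msg='the dopy library is required for this module')
```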
Fixes #4613
* Restart EC2 instances with multiple network interfaces
A previous bug, #3234, caused instances with multiple ENIs to fail when being
started or stopped because sourceDestCheck is a per-interface attribute, but we
use the boto global access to it (which only works when there's a single ENI).
This patch handles a variant of that bug that only surfaced when restarting an
instance, and catches the same type of exception.
* Default termination_protection to None instead of False
AWS defaults the value of termination_protection to False, so we don't
need to explicitly send `False` when the user hasn't specified a
termination protection level. Before this patch, the below pair of tasks
would:
1. Create an instance (enabling termination_protection)
2. Restart that instance (disabling termination_protection)
Now, the default None value prevents the restart task from
disabling termination_protection.
```
- name: make an EC2 instance
  ec2:
    vpc_subnet_id: "{{ subnet }}"
    instance_type: t2.micro
    termination_protection: yes
    exact_count: 1
    count_tag:
      Name: TestInstance
    instance_tags:
      Name: TestInstance
    group_id: "{{ group }}"
    image: ami-7172b611
    wait: yes

- name: restart a protected EC2 instance
  ec2:
    vpc_subnet_id: "{{ subnet }}"
    state: restarted
    instance_tags:
      Name: TestInstance
    group_id: "{{ group }}"
    image: ami-7172b611
    wait: yes
```
Per #3877, the code to wait for spot instance requests to finish would
hang for the full wait time if any spot request failed for any reason.
This commit introduces status checks for spot requests, so if the
request fails, finishes, or is cancelled the task will fail/succeed
accordingly.
One edge case introduced here is that if a user terminates the instance
associated with the request manually it won't fail the play, under the
presumption that the user *wants* the instance terminated.
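Sketch of the polling logic with boto's spot request objects (the set of states treated as terminal, and the poll interval, are assumptions):
```
import time

def wait_for_spot_requests(module, ec2, request_ids, wait_timeout):
    end = time.time() + wait_timeout
    while time.time() < end:
        pending = 0
        for req in ec2.get_all_spot_instance_requests(request_ids=request_ids):
            if req.state == 'active':
                continue  # fulfilled
            if req.state in ('cancelled', 'closed', 'failed'):
                # A request closed because the user terminated the instance is
                # treated as success; any other terminal state fails the task.
                if req.status.code != 'instance-terminated-by-user':
                    module.fail_json(msg='Spot request %s ended: %s'
                                         % (req.id, req.status.message))
            else:
                pending += 1
        if not pending:
            return
        time.sleep(5)
    module.fail_json(msg='Timed out waiting for spot instance requests')
```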
A value for the project_id parameter to shade's create_network()
call was always being sent, even if no value for 'project' was
supplied. This was breaking folks with older versions of shade
(< 1.6).
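The gist of the fix, sketched against shade's API (option handling is simplified):
```
def create_network(cloud, params):
    # Only pass project_id when a project was supplied, so shade releases
    # older than 1.6 (which don't accept the kwarg) keep working.
    kwargs = {}
    if params.get('project'):
        kwargs['project_id'] = cloud.get_project(params['project'])['id']
    return cloud.create_network(name=params['name'],
                                shared=params['shared'],
                                admin_state_up=params['admin_state_up'],
                                external=params['external'],
                                **kwargs)
```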
Fixes https://github.com/ansible/ansible-modules-core/issues/3567
The AWS API requires that any termination policy list that includes
`Default` must end with Default. The attribute sorting caused any list
of attributes to be lexically sorted, so a list like
`["OldestLaunchConfiguration", "Default"]` would be changed to
`["Default", "OldestLaunchConfiguration"]` because default is earlier
alphabetically. This caused calls to fail with BotoServerError per #4069
This commit also adds proper tracebacks to all BotoServerError fail_json
calls.
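To make the ordering issue concrete (the helper name is illustrative):
```
def termination_policies_changed(current, desired):
    # Order matters to the AutoScaling API: a list containing 'Default' must
    # end with it, so compare (and send) the lists unsorted.
    return list(current) != list(desired)

# sorted(["OldestLaunchConfiguration", "Default"]) moves "Default" to the
# front, which the API rejects; comparing the lists as-is avoids that.
```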
Closes #4069
Previously calculation of the number of instances that have been
terminated assumed all instances were in the first reservation returned
by AWS. If this is not the case the calculated number of instances
terminated never reaches the number of instances and the module always
times out. By unpacking the instances we get an accurate number and the
module correctly exits.
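The unpacking fix, roughly (boto-style API; the helper name is illustrative):
```
def count_terminated(ec2, instance_ids):
    # get_all_instances returns reservations, and the instances may be spread
    # across several of them; flatten before counting instead of only looking
    # at the first reservation.
    reservations = ec2.get_all_instances(instance_ids=instance_ids)
    instances = [inst for res in reservations for inst in res.instances]
    return sum(1 for inst in instances if inst.state == 'terminated')
```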
Currently instances with multiple ENIs can't be started or stopped
because sourceDestCheck is a per-interface attribute, but we use the
boto global access to it (which only works when there's a single ENI).
This patch handles multiple ENI's and applies the sourcedestcheck across
all interfaces the same way.
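In boto terms, the per-interface handling looks roughly like this (attribute names follow boto; the surrounding flow is simplified):
```
def apply_source_dest_check(ec2, instance, value):
    # There is no single instance-level flag when several ENIs are attached,
    # so set the attribute on every interface instead of using the boto
    # instance-level shortcut (which only works with one ENI).
    for interface in instance.interfaces:
        if interface.source_dest_check != value:
            ec2.modify_network_interface_attribute(interface.id,
                                                   'sourceDestCheck', value)
```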
Fixes #3234
If you apply `wait=yes` and use `instance_tags` as your filter for
stopping/starting EC2 instances, this stack trace happens:
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
  File "/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py", line 1540, in <module>
    main()
  File "/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py", line 1514, in main
    (changed, instance_dict_array, new_instance_ids) = startstop_instances(module, ec2, instance_ids, state, instance_tags)
  File "/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py", line 1343, in startstop_instances
    if len(matched_instances) < len(instance_ids):
TypeError: object of type 'NoneType' has no len()

fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "ec2"}, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py\", line 1540, in <module>\n    main()\n  File \"/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py\", line 1514, in main\n    (changed, instance_dict_array, new_instance_ids) = startstop_instances(module, ec2, instance_ids, state, instance_tags)\n  File \"/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py\", line 1343, in startstop_instances\n    if len(matched_instances) < len(instance_ids):\nTypeError: object of type 'NoneType' has no len()\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
```
That's because the `instance_ids` variable is None if not supplied
in the task. That means the instances that result from the instance_tags
query aren't going to be included in the wait loop. To fix this, a list
needs to be kept of instances with matching tags and that list needs to
be added to `instance_ids` before the wait loop.
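Sketch of that fix (variable names are illustrative):
```
def ids_to_wait_on(instance_ids, matched_instances):
    # instance_ids is None when only instance_tags were supplied; make sure
    # the instances found by the tag filter are included in the wait loop.
    ids = list(instance_ids or [])
    ids.extend(inst.id for inst in matched_instances if inst.id not in ids)
    return ids
```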
Before this, all spot instance requests would fail because the code
_always_ called module.fail_json when the parameter was set (which it
always was, because the module parameter's default was set to 'stop').
As the comment said, this parameter doesn't make sense for spot
instances at all, so the error message was also misleading.
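A hedged sketch of the corrected check, assuming the option in question is `instance_initiated_shutdown_behavior` (the parameter is not named above, so treat that as an assumption):
```
def validate_spot_shutdown_behavior(module):
    # With no module-level default, this only fails when the user explicitly
    # asked for a shutdown behavior, which spot instances do not support.
    behavior = module.params.get('instance_initiated_shutdown_behavior')
    if module.params.get('spot_price') and behavior:
        module.fail_json(msg='instance_initiated_shutdown_behavior is not '
                             'supported for spot instances')
```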
Due to a mixup of the group/role/user and policy names, policies with
the same name as the group/role/user they are attached to would never be
updated after creation. To fix that, we needed two changes to the logic
of policy comparison:
- Compare the new policy name to *all* matching policies, not just the
first in lexicographical order
- Compare the new policy name to the matching ones, not to the IAM
object the policy is attached to
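Sketch of the corrected comparison (the caller is assumed to have already fetched the attached policies into a name-to-document mapping):
```
def policy_needs_update(attached_policies, policy_name, new_policy_doc):
    # attached_policies: dict mapping each attached policy's name to its
    # document. Compare against the policy with the *same name*, never against
    # the group/role/user name or the first match in lexical order.
    if policy_name not in attached_policies:
        return True  # brand-new policy
    return attached_policies[policy_name] != new_policy_doc
```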
The ssh_public_keys parameter must be a list; otherwise it will give the error:
"argument ssh_public_keys is of type <type 'dict'> and we were unable to convert to list"