
proposals were moved to ansible/proposals

Brian Coca 2016-03-28 15:25:06 -04:00
parent 04610106a3
commit 0c92ec5e8f
13 changed files with 0 additions and 1784 deletions

View file

@@ -1,150 +0,0 @@
# Auto Install Ansible roles
*Author*: Will Thames <@willthames>
*Date*: 19/02/2016
## Motivation
To use the latest (or even a specific) version of a playbook with the
appropriate roles, the following steps are typically required:
```
git pull upstream branch
ansible-galaxy install -r path/to/rolesfile.yml -p path/to/rolesdir -f
ansible-playbook run-the-playbook.yml
```
### Problems
- The most likely step in this process to be forgotten is the middle step. While we can improve processes and documentation to try and ensure that this step is not skipped, we can improve ansible-playbook so that the step is not required.
- Ansible-galaxy does not sufficiently handle versioning.
- There is not a consistent format for specifying a role in a playbook or a dependent role in meta/main.yml.
## Approaches
### Approach 1: Specify rolesfile and rolesdir in playbook
Provide new `rolesdir` and `rolesfile` keywords:
```
- hosts: application-env
become: True
rolesfile: path/to/rolesfile.yml
rolesdir: path/to/rolesdir
roles:
- roleA
- { role: roleB, tags: role_roleB }
```
Running ansible-playbook against such a playbook would cause the roles listed in
`rolesfile` to be installed in `rolesdir`.
Add new configuration to allow a default rolesfile, a default rolesdir and
whether or not to auto-update roles (defaulting to False).
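For reference, the rolesfile consumed here would be the existing ansible-galaxy requirements format; a minimal sketch (URLs and versions are illustrative):
```
# path/to/rolesfile.yml
- src: https://git.example.com/roleA.git
  scm: git
  version: 0.1
- src: https://git.example.com/roleB.git
  scm: git
  version: 0.3
```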
#### Advantages
- Existing mechanism for roles management is maintained
- Playbooks are not polluted with roles 'meta' information (version, source)
#### Disadvantages
- Adds two new keywords
- Adds three new configuration variables for defaults
### Approach 2: Allow rolesfile inclusion
Allow the `roles` section to include a roles file:
```
- hosts: application-env
become: True
roles:
- include: path/to/rolesfile.yml
```
Running this playbook would cause the roles to be updated from the included
roles file.
This would also be functionally equivalent to specifying the roles file
content within the playbook:
```
- hosts: application-env
become: True
roles:
- src: https://git.example.com/roleA.git
scm: git
version: 0.1
- src: https://git.example.com/roleB.git
scm: git
version: 0.3
tags: role_roleB
```
#### Advantages
- The existing rolesfile mechanism is maintained
- Uses familiar inclusion mechanism
#### Disadvantages
- Separate playbooks would need separate rolesfiles. For example, a provision
playbook and upgrade playbook would likely have some overlap - currently
you can use the same rolesfile with ansible-galaxy so that the same
roles are available but only a subset of roles is used by the smaller
playbook.
- The roles file would need to be able to include playbook features such
as role tagging.
- New configuration defaults would likely still be required (and possibly
an override keyword for rolesdir and role auto update)
### Approach 3: Resolve and install roles at playbook run time
*Author*: chouseknecht <@chouseknecht>
*Date*: 24/02/2016
This is a combination of ideas taken from IRC, the ansible development group, and conversations at the recent contributor's summit. It also incorporates most of the ideas from Approach 1 (above), with two notable exceptions: 1) it eliminates maintaining a roles file (or what we think of today as requirements.yml); and 2) it does not include the definition of rolesdir in the playbook.
Here's the approach:
- Share the role install logic between ansible-playbook and ansible-galaxy so that ansible-playbook can resolve and install missing roles at playbook run time simply by evaluating the playbook.
- Ansible-galaxy also installs or preloads roles by examining a playbook.
- Deprecate support for requirements.yml (the two points above make it unnecessary).
- Make ansible-playbook auto-downloading of roles configurable in ansible.cfg. In certain circumstances it may be desirable to disable auto-download.
- Provide one format for specifying a role, whether in a playbook or in meta/main.yml (see the usage sketch after this list). Suggested format:
```
{
'scm': 'git',
'src': 'http://git.example.com/repos/repo.git',
'version': 'v1.0',
'name': 'repo'
}
```
- For roles installed from Galaxy, Galaxy should provide some measure of security against version change. Galaxy should track the commit related to a version. If the role owner changes historical versions (today tags) and thus changes the commit hash, the affected version would become un-installable.
- Refactor the install process to encompass the following:
- Idempotency - If a role version is already installed, don't attempt to install it again. If symlinks are present (see below), don't break or remove them.
- Provide a --force option that overrides idempotency.
- Install roles via tree-ish references, not just tags or commits (PR exists for this).
- Support a whitelist of role sources. Galaxy should not be automatically assumed to be part of the whitelist.
- Continue to be recursive, allowing roles to have dependencies specified in meta/main.yml.
- Continue to install roles in the roles_path.
- Use a symlink approach to managing role versions in the roles_path. Example:
```
roles/
briancoca.oracle_java7.v1.0
briancoca.oracle_java7.v2.2
briancoca.oracle_java7.qs3ih6x
briancoca.oracle_java7 => briancoca.oracle_java7.qs3ih6x
```
## Conclusion
Feedback is requested to improve any of the above approaches, or provide further approaches to solve this problem.

View file

@@ -1,487 +0,0 @@
# Docker_Container Module Proposal
## Purpose and Scope:
The purpose of docker_container is to manage the lifecycle of a container. The module will provide a mechanism for
moving the container between absent, present, stopped and started states. It will focus purely on managing container
state. The intention of the narrow focus is to make understanding and using the module clear and keep maintenance
and testing as easy as possible.
Docker_container will manage a container using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar to
how other cloud modules operate.
The container world is moving rapidly, so the goal is to create a suite of docker modules that keep pace, with docker_container
leading the way. If this project is successful, it will naturally deprecate the existing docker module.
## Parameters:
Docker_container will accept the parameters listed below. An attempt has been made to represent all the options available to
docker's create, kill, pause, run, rm, start, stop and update commands.
Parameters for connecting to the API are not listed here. They are included in the common utility module mentioned above.
```
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
default: null
capabilities:
description:
- List of capabilities to add to the container.
default: null
command:
description:
- Command or list of commands to execute in the container when it starts.
default: null
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period
default: 0
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota
default: 0
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
default: null
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1)
default: null
cpu_shares:
description:
- CPU shares (relative weight).
default: null
detach:
description:
- Enable detached mode to leave the container running in background.
If disabled, fail unless the process exits cleanly.
default: true
devices:
description:
- List of host device bindings to add to the container. Each binding is a mapping expressed
in the format: <path_on_host>:<path_in_container>:<cgroup_permissions>
default: null
dns_servers:
description:
- List of custom DNS servers.
default: null
dns_search_domains:
description:
- List of custom DNS search domains.
default: null
env:
description:
- Dictionary of key/value pairs.
default: null
entrypoint:
description:
- String or list of commands that overwrite the default ENTRYPOINT of the image.
default: null
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary. The hostname will be added to the
container's /etc/hosts file.
default: null
exposed_ports:
description:
- List of additional container ports to expose for port mappings or links.
If the port is already exposed using EXPOSE in a Dockerfile, it does not
need to be exposed again.
default: null
aliases:
- exposed
force_kill:
description:
- Use with absent, present, started and stopped states to use the kill command rather
than the stop command.
default: false
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
default: null
hostname:
description:
- Container hostname.
default: null
image:
description:
- Container image used to create and match containers.
required: true
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
default: false
ipc_mode:
description:
- Set the IPC mode for the container. Can be one of
'container:<name|id>' to reuse another container's IPC namespace
or 'host' to use the host's IPC namespace within the container.
default: null
keep_volumes:
description:
- Retain volumes associated with a removed container.
default: false
kill_signal:
description:
- Override default signal used to kill a running container.
default: null
kernel_memory:
description:
- Kernel memory limit (format: <number>[<unit>]). Number is a positive integer.
Unit can be one of b, k, m, or g. Minimum is 4M.
default: 0
labels:
description:
- Dictionary of key/value pairs.
default: null
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias)
default: null
log_driver:
description:
- Specify the logging driver.
choices:
- json-file
- syslog
- journald
- gelf
- fluentd
- awslogs
- splunk
default: json-file
log_options:
description:
- Dictionary of options specific to the chosen log_driver. See https://docs.docker.com/engine/admin/logging/overview/
for details.
required: false
default: null
mac_address:
description:
- Container MAC address (e.g. 92:d0:c6:0a:29:33)
default: null
memory:
description:
- Memory limit (format: <number>[<unit>]). Number is a positive integer.
Unit can be one of b, k, m, or g
default: 0
memory_reservation:
description:
- Memory soft limit (format: <number>[<unit>]). Number is a positive integer.
Unit can be one of b, k, m, or g
default: 0
memory_swap:
description:
- Total memory limit (memory + swap, format:<number>[<unit>]).
Number is a positive integer. Unit can be one of b, k, m, or g.
default: 0
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
default: 0
name:
description:
- Assign a name to a new container or match an existing container.
- When identifying an existing container, name may be a name or a long or short container ID.
required: true
network_mode:
description:
- Connect the container to a network.
choices:
- bridge
- container:<name|id>
- host
- none
default: null
networks:
description:
- Dictionary of networks to which the container will be connected. The dictionary must have a name key (the name of the network).
Optional keys include: aliases (a list of container aliases), and links (a list of links in the format C(container_name:alias)).
default: null
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
default: false
paused:
description:
- Use with the started state to pause running processes inside the container.
default: false
pid_mode:
description:
- Set the PID namespace mode for the container. Currently only supports 'host'.
default: null
privileged:
description:
- Give extended privileges to the container.
default: false
published_ports:
description:
- List of ports to publish from the container to the host.
- Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface.
- Container ports must be exposed either in the Dockerfile or via the C(exposed_ports) option.
- A value of ALL will publish all exposed container ports to random host ports, ignoring
any other mappings.
aliases:
- ports
read_only:
description:
- Mount the container's root file system as read-only.
default: false
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
default: false
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
default: false
restart_policy:
description:
- Container restart policy.
choices:
- on-failure
- always
default: on-failure
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
default: 0
shm_size:
description:
- Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater than `0`.
Unit is optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes).
- Omitting the unit defaults to bytes. If you omit the size entirely, the system uses `64m`.
default: null
security_opts:
description:
- List of security options in the form of C("label:user:User")
default: null
state:
description:
- "absent" - A container matching the specified name will be stopped and removed. Use force_kill to kill the container
rather than stopping it. Use keep_volumes to retain volumes associated with the removed container.
- "present" - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config. Use recreate to force the re-creation of the matching container. Use force_kill to kill the
container rather than stopping it. Use keep_volumes to retain volumes associated with a removed container.
- "started" - Asserts there is a running container matching the name and any provided configuration. If no container
matches the name, a container will be created and started. If a container matching the name is found but the
configuration does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed
and a new container will be created with the requested configuration and started. Use recreate to always re-create a
matching container, even if it is running. Use restart to force a matching container to be stopped and restarted. Use
force_kill to kill a container rather than stopping it. Use keep_volumes to retain volumes associated with a removed
container.
- "stopped" - a container matching the specified name will be stopped. Use force_kill to kill a container rather than
stopping it.
required: false
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
default: null
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending SIGKILL.
required: false
trust_image_content:
description:
- If true, skip image verification.
default: false
tty:
description:
- Allocate a pseudo-TTY.
default: false
ulimits:
description:
- List of ulimit options. A ulimit is specified as C(nofile:262144:262144)
default: null
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- Can be [ user | user:group | uid | uid:gid | user:gid | uid:group ]
default: null
uts:
description:
- Set the UTS namespace mode for the container.
default: null
volumes:
description:
- List of volumes to mount within the container.
- 'Use docker CLI-style syntax: C(/host:/container[:mode])'
- You can specify a read mode for the mount with either C(ro) or C(rw).
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or
private label for the volume.
default: null
volume_driver:
description:
- The container's volume driver.
default: none
volumes_from:
description:
- List of container names or IDs to get volumes from.
default: null
```
## Examples:
```
- name: Create a data container
docker_container:
name: mydata
image: busybox
volumes:
- /data
- name: Re-create a redis container
docker_container:
name: myredis
image: redis
command: redis-server --appendonly yes
state: present
recreate: yes
exposed_ports:
- 6379
volumes_from:
- mydata
- name: Restart a container
docker_container:
name: myapplication
image: someuser/appimage
state: started
restart: yes
links:
- "myredis:aliasedredis"
devices:
- "/dev/sda:/dev/xvda:rwm"
ports:
- "8080:9000"
- "127.0.0.1:8081:9001/udp"
env:
SECRET_KEY: ssssh
- name: Container present
docker_container:
name: mycontainer
state: present
recreate: yes
force_kill: yes
image: someplace/image
command: echo "I'm here!"
- name: Start 4 load-balanced containers
docker_container:
name: "container{{ item }}"
state: started
recreate: yes
image: someuser/anotherappimage
command: sleep 1d
with_sequence: count=4
- name: Remove a container
docker_container:
name: ohno
state: absent
- name: Syslogging output
docker_container:
name: myservice
state: started
log_driver: syslog
log_options:
syslog-address: tcp://my-syslog-server:514
syslog-facility: daemon
syslog-tag: myservice
```
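The networks option is not exercised in the examples above; the following is one plausible reading of the parameter description, with illustrative network names:
```
- name: Start a container connected to two networks
  docker_container:
    name: myapplication
    image: someuser/appimage
    state: started
    networks:
      - name: web_net
        aliases:
          - app
      - name: db_net
        links:
          - "myredis:aliasedredis"
```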
## Returns:
The JSON object returned by the module will include a *results* object providing `docker inspect` output for the affected container.
```
{
changed: True,
failed: False,
rc: 0,
results: {
< the results of `docker inspect` >
}
}
```
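Since the module returns `docker inspect` output, a playbook can register and act on it; a minimal sketch assuming the return structure shown above:
```
- name: Create a data container and capture its details
  docker_container:
    name: mydata
    image: busybox
  register: output

- name: Show the inspect data for the affected container
  debug:
    var: output.results
```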

View file

@@ -1,159 +0,0 @@
# Docker_Files Modules Proposal
## Purpose and Scope
The purpose of docker_files is to provide for retrieving a file or folder from a container's file system,
inserting a file or folder into a container, exporting a container's entire filesystem as a tar archive, or
retrieving a list of changed files from a container's file system.
Docker_files will manage a container using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar to
how other cloud modules operate.
## Parameters
Docker_files accepts the parameters listed below. API connection parameters will be part of a shared utility module
as mentioned above.
```
diff:
description:
- Provide a list of container names or IDs. For each container a list of changed files and directories found on the
container's file system will be returned. Diff is mutually exclusive with all other options except event_type.
Use event_type to choose which events to include in the output.
default: null
export:
description:
- Provide a container name or ID. The container's file system will be exported to a tar archive. Use dest
to provide a path for the archive on the local file system. If the output file already exists, it will not be
overwritten. Use the force option to overwrite an existing archive.
default: null
dest:
description:
- Destination path of copied files. If the destination is a container file system, precede the path with a
container name or ID + ':'. For example, C(mycontainer:/path/to/file.txt). If the destination path does not
exist, it will be created. If the destination path exists on the local filesystem, it will not be overwritten.
Use the force option to overwrite existing files on the local filesystem.
default: null
force:
description:
- Overwrite existing files on the local filesystem.
default: false
follow_link:
description:
- Follow symbolic links in the src path. If src is local and the file is a symbolic link, the symbolic link, not the
target, is copied by default. To copy the link target and not the link, set follow_link to true.
default: false
event_type:
description:
- Select the specific event type to list in the diff output.
choices:
- all
- add
- delete
- change
default: all
src:
description:
- The source path of file(s) to be copied. If source files are found on the container's file system, precede the
path with the container name or ID + ':'. For example, C(mycontainer:/path/to/files).
default: null
```
## Examples
```
- name: Copy files from the local file system to a container's file system
docker_files:
src: /tmp/rpm
dest: mycontainer:/tmp
follow_link: yes
- name: Copy files from the container to the local filesystem and overwrite existing files
docker_files:
src: container1:/var/lib/data
dest: /tmp/container1/data
force: yes
- name: Export container filesystem
docker_files:
export: container1
dest: /tmp/container1.tar
force: yes
- name: List all differences for multiple containers.
docker_files:
diff:
- mycontainer1
- mycontainer2
- name: Include changed files only in diff output
docker_files:
diff:
- mycontainer1
event_type: change
```
## Returns
Returned from diff:
```
{
changed: false,
failed: false,
rc: 0,
results: {
mycontainer1: [
{ state: 'C', path: '/dev' },
{ state: 'A', path: '/dev/kmsg' },
{ state: 'C', path: '/etc' },
{ state: 'A', path: '/etc/mtab' }
],
mycontainer2: [
{ state: 'C', path: '/foo' },
{ state: 'A', path: '/foo/bar.txt' }
]
}
}
```
Returned when copying files:
```
{
changed: true,
failed: false,
rc: 0,
results: {
src: /tmp/rpms,
dest: mycontainer:/tmp,
files_copied: [
'file1.txt',
'file2.jpg'
]
}
}
```
Return when exporting container filesystem:
```
{
changed: true,
failed: false,
rc: 0,
results: {
src: container_name,
dest: local/path/archive_name.tar
}
}
```

View file

@@ -1,47 +0,0 @@
# Docker_Image_Facts Module Proposal
## Purpose and Scope
The purpose of docker_image_facts is to inspect docker images.
Docker_image_facts will use docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
## Parameters
Docker_image_facts will support the parameters listed below. API connection parameters will be part of a shared
utility module as mentioned above.
```
name:
description:
- An image name or list of image names. The image name can include a tag using the format C(name:tag).
default: null
```
## Examples
```
- name: Inspect all images
docker_image_facts:
register: image_facts
- name: Inspect a single image
docker_image_facts:
name: myimage:v1
register: myimage_v1_facts
```
## Returns
```
{
changed: False,
failed: False,
rc: 0,
result: [ < inspection output > ]
}
```

View file

@@ -1,207 +0,0 @@
# Docker_Image Module Proposal
## Purpose and Scope
The purpose is to update the existing docker_image module. The updates include expanding the module's capabilities to
match the build, load, pull, push, rmi, and save docker commands and adding support for remote registries.
Docker_image will manage images using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
## Parameters
Docker_image will support the parameters listed below. API connection parameters will be part of a shared utility
module as mentioned above.
```
archive_path:
description:
- Save image to the provided path. Use with state present to always save the image to a tar archive. If
intermediate directories in the path do not exist, they will be created. If a matching
archive already exists, it will be overwritten.
default: null
config_path:
description:
- Path to a custom docker config file. Docker-py defaults to using ~/.docker/config.json.
cgroup_parent:
description:
- Optional parent cgroup for build containers.
default: null
cpu_shares:
description:
- CPU shares for build containers. Integer value.
default: 0
cpuset_cpus:
description:
- CPUs in which to allow build container execution C(1,3) or C(1-3).
default: null
dockerfile:
description:
- Name of dockerfile to use when building an image.
default: Dockerfile
email:
description:
- The email for the registry account. Provide with username and password when credentials are not encoded
in the docker configuration file or when encoded credentials should be updated.
default: null
nolog: true
force:
description:
- Use with absent state to un-tag and remove all images matching the specified name. Use with present state to
force a pull or rebuild of the image.
default: false
load_path:
description:
- Use with state present to load a previously saved image. Provide the full path to the image archive file.
default: null
memory:
description:
- Build container limit. Memory limit specified as a positive integer for number of bytes.
memswap:
description:
- Build container limit. Total memory (memory + swap). Specify as a positive integer for number of bytes or
-1 to disable swap.
default: null
name:
description:
- Image name or ID.
required: true
nocache:
description:
- Do not use cache when building an image.
default: false
password:
description:
- Password used when connecting to the registry. Provide with username and email when credentials are not encoded
in the docker configuration file or when encoded credentials should be updated.
default: null
nolog: true
path:
description:
- Path to Dockerfile and context from which to build an image.
default: null
push:
description:
- Use with state present to always push an image to the registry.
default: false
registry:
description:
- URL of the registry. If not provided, defaults to Docker Hub.
default: null
rm:
description:
- Remove intermediate containers after build.
default: true
tags:
description:
- Image tags. When pulling or pushing, set to 'all' to include all tags.
default: latest
url:
description:
- The location of a Git repository. The repository acts as the context when building an image.
- Mutually exclusive with path.
username:
description:
- Username used when connecting to the registry. Provide with password and email when credentials are not encoded
in the docker configuration file or when encoded credentials should be updated.
default: null
nolog: true
state:
description:
- "absent" - if image exists, unconditionally remove it. Use the force option to un-tag and remove all images
matching the provided name.
- "present" - check if image is present with the provided tag. If the image is not present or the force option
is used, the image will either be pulled from the registry, built or loaded from an archive. To build the image,
provide a path or url to the context and Dockerfile. To load an image, use load_path to provide a path to
an archive file. If no path, url or load_path is provided, the image will be pulled. Use the registry
parameters to control the registry from which the image is pulled.
required: false
default: present
choices:
- absent
- present
http_timeout:
description:
- Timeout for HTTP requests during the image build operation. Provide a positive integer value for the number of
seconds.
default: null
```
## Examples
```
- name: build image
docker_image:
path: "/path/to/build/dir"
name: "my_app"
tags:
- v1.0
- mybuild
- name: force pull an image and all tags
docker_image:
name: "my/app"
force: yes
tags: all
- name: untag and remove image
docker_image:
name: "my/app"
state: absent
force: yes
- name: push an image to Docker Hub with all tags
docker_image:
name: my_image
push: yes
tags: all
- name: pull image from a private registry
docker_image:
name: centos
registry: https://private_registry:8080
```
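The url, archive_path and load_path options are not covered by the examples above; hedged sketches (repository URL and file paths are illustrative):
```
- name: build an image from a git repository context
  docker_image:
    name: my_app
    url: https://git.example.com/repos/my_app.git

- name: build an image and save it to a tar archive
  docker_image:
    name: my_app
    path: "/path/to/build/dir"
    archive_path: /tmp/my_app.tar

- name: load a previously saved image
  docker_image:
    name: my_app
    load_path: /tmp/my_app.tar
```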
## Returns
```
{
changed: True,
failed: False,
rc: 0,
action: built | pulled | loaded | removed | none,
msg: < text confirming the action that was taken >,
results: {
< output from docker inspect for the affected image >
}
}
```

View file

@@ -1,48 +0,0 @@
# Docker_Network_Facts Module Proposal
## Purpose and Scope
Docker_network_facts will inspect networks.
Docker_network_facts will use docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
## Parameters
Docker_network_facts will accept the parameters listed below. API connection parameters will be part of a shared
utility module as mentioned above.
```
name:
description:
- Network name or list of network names.
default: null
```
## Examples
```
- name: Inspect all networks
docker_network_facts:
register: network_facts
- name: Inspect a specific network
docker_network_facts:
name: web_app
register: web_app_facts
```
## Returns
```
{
changed: False,
failed: False,
rc: 0,
results: [ < inspection output > ]
}
```

View file

@@ -1,130 +0,0 @@
# Docker_Network Module Proposal
## Purpose and Scope:
The purpose of Docker_network is to create networks, connect containers to networks, disconnect containers from
networks, and delete networks.
Docker network will manage networks using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar to
how other cloud modules operate.
## Parameters:
Docker_network will accept the parameters listed below. Parameters related to connecting to the API will be handled in
a shared utility module, as mentioned above.
```
connected:
description:
- List of container names or container IDs to connect to a network.
default: null
driver:
description:
- Specify the type of network. Docker provides bridge and overlay drivers, but 3rd party drivers can also be used.
default: bridge
driver_options:
description:
- Dictionary of network settings. Consult docker docs for valid options and values.
default: null
force:
description:
- With state 'absent', forces disconnecting all containers from the network prior to deleting the network. With
state 'present', disconnects all containers, deletes the network and re-creates it.
default: false
incremental:
description:
- By default the connected list is canonical, meaning containers not on the list are removed from the network.
Use incremental to leave existing containers connected.
default: false
ipam_driver:
description:
- Specify an IPAM driver.
default: null
ipam_options:
description:
- Dictionary of IPAM options.
default: null
network_name:
description:
- Name of the network to operate on.
default: null
required: true
state:
description:
- "absent" deletes the network. If a network has connected containers, it cannot be deleted. Use the force option
to disconnect all containers and delete the network.
- "present" creates the network, if it does not already exist with the specified parameters, and connects the list
of containers provided via the connected parameter. Containers not on the list will be disconnected. An empty
list will leave no containers connected to the network. Use the incremental option to leave existing containers
connected. Use the force option to force re-creation of the network.
default: present
choices:
- absent
- present
```
## Examples:
```
- name: Create a network
docker_network:
network_name: network_one
- name: Remove all but selected list of containers
docker_network:
network_name: network_one
connected:
- containera
- containerb
- containerc
- name: Remove a single container
docker_network:
network_name: network_one
connected: "{{ fulllist|difference(['containera']) }}"
- name: Add a container to a network, leaving existing containers connected
docker_network:
network_name: network_one
connected:
- containerc
incremental: yes
- name: Create a network with options (Not sure if 'ip_range' is correct key name)
docker_network:
network_name: network_two
ipam_options:
subnet: '172.3.26.0/16'
gateway: 172.3.26.1
ip_range: '192.168.1.0/24'
- name: Delete a network, disconnecting all containers
docker_network:
network_name: network_one
state: absent
force: yes
```
## Returns:
```
{
changed: True,
failed: False,
rc: 0,
action: created | removed | none,
results: {
< results from docker inspect for the affected network >
}
}
```

View file

@@ -1,48 +0,0 @@
# Docker_Volume_Facts Module Proposal
## Purpose and Scope
Docker_volume_facts will inspect volumes.
Docker_volume_facts will use docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
## Parameters
Docker_volume_facts will accept the parameters listed below. API connection parameters will be part of a shared
utility module as mentioned above.
```
name:
description:
- Volume name or list of volume names.
default: null
```
## Examples
```
- name: Inspect all volumes
docker_volume_facts:
register: volume_facts
- name: Inspect a specific volume
docker_volume_facts:
name: data
register: data_vol_facts
```
## Returns
```
{
changed: False,
failed: False,
rc: 0,
results: [ < output from volume inspection > ]
}
```

View file

@ -1,82 +0,0 @@
# Docker_Volume Modules Proposal
## Purpose and Scope
The purpose of docker_volume is to manage volumes.
Docker_volume will manage volumes using docker-py to communicate with either a local or remote API. It will
support API versions >= 1.14. API connection details will be handled externally in a shared utility module similar
to how other cloud modules operate.
## Parameters
Docker_volume accepts the parameters listed below. Parameters for connecting to the API are not listed here, as they
will be part of the shared module mentioned above.
```
driver:
description:
- Volume driver.
default: local
force:
description:
- Use with state 'present' to force removal and re-creation of an existing volume. This will not remove and
re-create the volume if it is already in use.
name:
description:
- Name of the volume.
required: true
default: null
options:
description:
- Dictionary of driver-specific options. The local driver does not currently support
any options.
default: null
state:
description:
- "absent" removes a volume. A volume cannot be removed if it is in use.
- "present" create a volume with the specified name, if the volume does not already exist. Use the force
option to remove and re-create a volume. Even with the force option a volume cannot be removed and re-created if
it is in use.
default: present
choices:
- absent
- present
```
## Examples
```
- name: Create a volume
docker_volume:
name: data
- name: Remove a volume
docker_volume:
name: data
state: absent
- name: Re-create an existing volume
docker_volume:
name: data
state: present
force: yes
```
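The options parameter is not exercised above; a sketch using a hypothetical third-party driver with illustrative driver-specific keys:
```
- name: Create a volume with driver-specific options
  docker_volume:
    name: shared
    driver: example_driver      # hypothetical third-party volume driver
    options:
      size: 10GiB               # illustrative driver-specific option
      replicas: "2"             # illustrative driver-specific option
```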
## Returns
```
{
changed: true,
failed: false,
rc: 0,
action: removed | created | none,
results: {
< show the result of docker inspect of an affected volume >
}
}
```

View file

@@ -1,110 +0,0 @@
# Proposal: Proposals - have a process and documentation
*Author*: Robyn Bergeron <@robynbergeron>
*Date*: 04/03/2016
- Status: New
- Proposal type: community development process
- Targeted Release: Forever, until we improve it more at a later date.
- PR for Comments: https://github.com/ansible/ansible/pull/14802#
- Estimated time to implement: 2 weeks at most
Comments on this proposal prior to acceptance are accepted in the comments section of the pull request linked above.
## Motivation
Define a light process for how proposals are created and accepted, and document the process permanently in community.html somewhere.
The following suggested process was created with the following ideas in mind:
- Transparency: notifications, decisions made in public meetings, etc., help people know what is going on.
- Avoid proliferation of multiple comments in multiple places; keep everything in the PR.
- Action is being taken: Knowing when and where decisions are made, and knowing who is the final authority, gives people the sense that things are moving.
- Ensure that new features or enhancements are added to the roadmap and release notes.
### Problems
Proposals are confusing. Should I write one? Where do I put it? Why can't I find any documentation about this? Who approves things? This is why we should have a light and unbureaucratic process.
## Solution proposal
This proposal has multiple parts:
- Proposed process for submitting / accepting proposals
- Suggested proposal template
Once the process and template are approved, a PR will be submitted for documenting the process permanently in documentation, as well as a PR to ansible/docs/proposals for the proposal template.
### Proposed Process
1: PROPOSAL CREATION
- Person making the proposal creates the proposal document in ansible/proposals via PR, following the proposal template.
- Person making the proposal creates an issue in ansible/proposals for that proposal.
- Author of proposal PR updates the proposal with link to the created issue #.
- Notify the community that this proposal exists.
- Author notifies ansible-devel mailing list for transparency, providing link to issue.
- Author includes commentary indicating that comments should *not* be in response to this email, but rather, community members should add comments or feedback in the issue.
- PRs may be made to the proposal, and can be merged or not at the submitter's discretion, and should be discussed/linked in the issue.
2: KEEP THE PROPOSAL MOVING TOWARDS A DECISION.
- Create tags in the ansible/proposals repo to indicate progress of the various proposal issues; ie: Discussion, Ready for meeting, Approved. (Can be used in conjunction with a board on waffle.io to show this, kanban style.)
- Proposals use public meetings as a mechanism to keep them moving.
- All proposals are decided on in a public meeting by a combination of folks with commit access to Ansible and any interested parties / users, as well as the author of the proposal. Time for approvals will be a portion of the overall schedule; proposals will be reviewed in the order received and may occasionally be deferred to the next meeting. If we are overwhelmed, a separate meeting may be scheduled.
(Note: ample feedback in the comments of the proposal issue should allow for folks to come to broad consensus in one way or another in the meeting rather rapidly, generally without an actual counted vote. However, the decision should be made *in the meeting*, so as to avoid any questions around whether or not the approval of one Ansible maintainer / committer reflects the opinions or decision of everyone.)
- *New* proposals are explicitly added to the public IRC meeting agenda for each week by the meeting organizer for acknowledgement of ongoing discussion and existence, and/or easy approval/rejection. (Either via a separate issue somewhere tracking any meeting items, or by adding a “meeting” label to the PR.)
- Existing new, not-yet-approved proposals are reviewed weekly by the meeting organizer to check for slow-moving/stalled proposals, or for flags from the proposal owner indicating that they'd like to have it addressed in the week's meeting.
3: PROPOSAL APPROVED
- Amendments needed to the proposal after IRC discussion should be made immediately.
- The proposal status should be changed to Approved / In Progress in the document.
- The proposal should be moved from /ansible/proposals to a roadmap folder (or similar).
- The proposal issue comments should be updated with a note by the meeting organizer that the proposal has been accepted, and further commentary should be in the PRs implementing the code itself.
- Proposals can also be PENDING or NEEDS INFO (waiting on something), or DECLINED.
4: CODE IN PROGRESS
- Approved proposals should be periodically checked for progress, especially if tied to a release and/or is noted as release blocking.
- PRs implementing the proposal are recommended to link to the original proposal PR or document for context.
5: CODE COMPLETE
- The proposal document, which should be in docs/roadmap, should have its status updated to COMPLETE.
- The release notes file for the targeted release should be updated with a small note regarding the feature or enhancement; completed proposals for community processes should have a follow-up mail sent to the mailing list providing information and links to the new process.
- Hooray! Buy your friend a tasty beverage of their choosing.
### Suggested Proposal Template Outline
Following the .md convention, a proposal template should go in the docs/proposals repository. This is a suggested outline; the template will provide more guidance / context and will be submitted as a PR upon approval of this proposal.
Please note that, in line with the above guidance that some processes will require fine-tuning over time, the suggested template outline below, as well as the final template submitted to the docs/proposals repo, has wiggle room in terms of description, and that what makes sense may vary from one proposal to another. The expectation is that people will simply do what seems right, and over time we'll figure out what works best; but in the meantime, guidance is nice.
#### TEMPLATE OUTLINE
- Proposal Title
- Author (w/github ID linked)
- Date:
- Status: New, Approved, Pending, Complete
- Proposal type: Feature / enhancement / community development process
- Targeted Release:
- PR for comments:
- Estimated time to implement:
Comments on this proposal prior to acceptance are accepted in the comments of the PR linked above.
- Motivation / Problems solved:
- Proposed Solution: (what you're doing, and why; keeping this loose for now.)
Other Suggested things to include:
- Dependencies / requirements:
- Testing:
- Documentation:
## Dependencies / requirements
- Approval of this proposed process is needed to create the actual documentation of the process.
- Weekly, public IRC meetings (which should probably be documented with respect to time / day of week / etc. in the contributor documentation) of the Ansible development community.
- Creation of appropriate labels in GitHub (or defining some other mechanism to gather items for a weekly meeting agenda, such as a separate issue in GitHub that links to the PRs.)
- Coming to an agreement regarding “what qualifies as a feature or enhancement that requires a proposal, vs. just submitting a PR with code.” It could simply be that if the change is large or very complicated, our recommendation is always to file a proposal to ensure (a) transparency (b) that a contributor doesn't waste their time on something that ultimately can't be merged at this time.
- Nice to have: Any new proposal PR landing in ansible/proposals is automatically merged and an email automatically notifies the mailing list of the existence and location of the proposal & related issue # for comments.
## Testing
Testing of this proposal will literally be via submitting this proposal through the proposed proposal process. If it fails miserably, we'll know it needs fine-tuning or needs to go in the garbage can.
## Documentation:
- Documentation of the process, including “what is a feature or enhancement vs. just a regular PR,” along with the steps shown above, will be added to the Ansible documentation in .rst format via PR. The documentation should also provide guidance on the standard wording of the email notifying ansible-devel list that the proposal exists and is ready for review in the issue comments.
- A proposal template should also be created in the ansible/proposals repo directory.

View file

@@ -1,205 +0,0 @@
# Publish / Subscribe for Handlers
*Author*: René Moser <@resmo>
*Date*: 07/03/2016
## Motivation
In some use cases a publish/subscribe style of triggering a handler is more convenient, e.g. restarting services after replacing SSL certs.
However, Ansible does not yet provide a built-in way to handle it.
### Problem
If your SSL cert changes, you usually have to reload/restart services to use the new certificate.
However, if you have an SSL role or a generic SSL play, you usually don't want to add service-specific handlers to it.
Instead it would be much more convenient to use a publish/subscribe kind of paradigm in the roles where the services are configured.
The way we currently implement it:
We use notify to set a fact, and later (in different plays) we act on that fact, again using notify.
~~~yaml
---
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
notify: publish ssl cert change
handlers:
- name: publish ssl cert change
set_fact:
ssl_cert_changed: true
- hosts: localhost
gather_facts: no
tasks:
- name: subscribe for ssl cert change
shell: echo cert changed
notify: service restart one
when: ssl_cert_changed is defined and ssl_cert_changed
handlers:
- name: service restart one
shell: echo service one restarted
- hosts: localhost
gather_facts: no
tasks:
- name: subscribe for ssl cert change
shell: echo cert changed
when: ssl_cert_changed is defined and ssl_cert_changed
notify: service restart two
handlers:
- name: service restart two
shell: echo service two restarted
~~~
However, this looks like a workaround for a feature that Ansible should provide in a much cleaner way.
## Approaches
### Approach 1:
Provide new `subscribe` keyword on handlers:
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
- hosts: localhost
gather_facts: no
handlers:
- name: service restart one
shell: echo service one restarted
subscribe: copy an ssl cert
- hosts: localhost
gather_facts: no
handlers:
- name: service restart two
shell: echo service two restarted
subscribe: copy an ssl cert
~~~
### Approach 2:
Provide new `subscribe` on handlers and `publish` keywords in tasks:
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
publish: yes
- hosts: localhost
gather_facts: no
handlers:
- name: service restart one
shell: echo service one restarted
subscribe: copy an ssl cert
- hosts: localhost
gather_facts: no
handlers:
- name: service restart two
shell: echo service two restarted
subscribe: copy an ssl cert
~~~
### Approach 3:
Provide new `subscribe` module:
A subscribe module could consume the results of a task by name; optionally, the value to react on could be specified (default: `changed`).
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
- hosts: localhost
gather_facts: no
tasks:
- subscribe:
name: copy an ssl cert
notify: service restart one
handlers:
- name: service restart one
shell: echo service one restarted
- hosts: localhost
gather_facts: no
tasks:
- subscribe:
name: copy an ssl cert
react_on: changed
notify: service restart two
handlers:
- name: service restart two
shell: echo service two restarted
~~~
### Approach 4:
Provide new `subscribe` module (same as Approach 3) and `publish` keyword:
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: copy an ssl cert
shell: echo cert has been changed
publish: yes
- hosts: localhost
gather_facts: no
tasks:
- subscribe:
name: copy an ssl cert
notify: service restart one
handlers:
- name: service restart one
shell: echo service one restarted
- hosts: localhost
gather_facts: no
tasks:
- subscribe:
name: copy an ssl cert
notify: service restart two
handlers:
- name: service restart two
shell: echo service two restarted
~~~
### Clarifications about role dependencies and publish
When service roles hold the subscription handlers and the publish task (e.g. the cert change) is defined in a dependency role (the SSL role), only the first service role that runs the "cert change" task as a dependency will trigger the publish.
In any other service role in the playbook that has the SSL role as a dependency, the task won't be `changed` anymore.
Therefore a message, once published, should not be overwritten or "unpublished" by running the same task in a later role in the playbook, as sketched below.
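A sketch of this scenario with illustrative role names: both service roles depend on the same SSL role, so the publish task runs (and is `changed`) only for the first of them:
~~~yaml
# roles/service_one/meta/main.yml
dependencies:
  - role: ssl

# roles/service_two/meta/main.yml
dependencies:
  - role: ssl

# The ssl role's "copy an ssl cert" task is 'changed' only when service_one
# pulls it in; when service_two runs the same dependency, the task is no
# longer 'changed', so the published message must persist.
~~~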
## Conclusion
Feedback is requested to improve any of the above approaches, or provide further approaches to solve this problem.

View file

@@ -1,77 +0,0 @@
# Proposal: Re-run handlers cli option
*Author*: René Moser <@resmo>
*Date*: 07/03/2016
- Status: New
## Motivation
The most annoying thing users face when running Ansible in production is having to run handlers manually because a task failed after a handler had been notified.
### Problems
Handler notifications get lost when a task fails, and Ansible offers no help for catching up on the notified handlers in the next ansible-playbook run.
~~~yaml
- hosts: localhost
gather_facts: no
tasks:
- name: simple task
shell: echo foo
notify: get msg out
- name: this task fails
fail: msg="something went wrong"
handlers:
- name: get msg out
shell: echo handler run
~~~
Result:
~~~
$ ansible-playbook test.yml
PLAY ***************************************************************************
TASK [simple task] *************************************************************
changed: [localhost]
TASK [this task fails] *********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "something went wrong"}
NO MORE HOSTS LEFT *************************************************************
RUNNING HANDLER [get msg out] **************************************************
to retry, use: --limit @test.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1
~~~
## Solution proposal
Similar to retry, Ansible should provide a way to manually invoke a list of handlers in addition to the handlers notified in the plays:
~~~
$ ansible-playbook test.yml --notify-handlers <handler>,<handler>,<handler>
$ ansible-playbook test.yml --notify-handlers @test.handlers
~~~
Example:
~~~
$ ansible-playbook test.yml --notify-handlers "get msg out"
~~~
The stdout of a failed play should provide an example how to run notified handlers in the next run:
~~~
...
RUNNING HANDLER [get msg out] **************************************************
to retry, use: --limit @test.retry --notify-handlers @test.handlers
~~~

View file

@@ -1,34 +0,0 @@
# Rename always_run to ignore_checkmode
*Author*: René Moser <@resmo>
*Date*: 02/03/2016
## Motivation
The task argument `always_run` is misleading.
Ansible is known for being readable by users without deep knowledge of creating playbooks, but they do not understand
what `always_run` does at first glance.
### Problems
The following looks scary if you have no idea what `always_run` does:
```
- shell: dangerous_cleanup.sh
when: cleanup == "yes"
always_run: yes
```
You have a conditional but also a word that says `always`. This is a conflict in terms of understanding.
## Solution Proposal
Deprecate `always_run` by renaming it to `ignore_checkmode`:
```
- shell: dangerous_cleanup.sh
when: cleanup == "yes"
ignore_checkmode: yes
```