mirror of
https://github.com/ansible-collections/community.general.git
synced 2024-09-14 20:13:21 +02:00
Docs how to test (2nd) (#24094)
* Big testing doc refactor
* Combine all the testing documentation into one place to make it easier to find
* Convert everything to RST
* Create testing_network guide
* Create testing landing page
* For each section detail "how to run" and "how to extend testing"
* More examples
* Lots more detail
This commit is contained in:
parent
bc22223d63
commit
ecbf8e933a
22 changed files with 866 additions and 558 deletions
2
Makefile
2
Makefile
|
@ -9,7 +9,7 @@
|
||||||
# make deb-src -------------- produce a DEB source
|
# make deb-src -------------- produce a DEB source
|
||||||
# make deb ------------------ produce a DEB
|
# make deb ------------------ produce a DEB
|
||||||
# make docs ----------------- rebuild the manpages (results are checked in)
|
# make docs ----------------- rebuild the manpages (results are checked in)
|
||||||
# make tests ---------------- run the tests (see test/README.md for requirements)
|
# make tests ---------------- run the tests (see https://docs.ansible.com/ansible/dev_guide/testing_units.html for requirements)
|
||||||
# make pyflakes, make pep8 -- source code checks
|
# make pyflakes, make pep8 -- source code checks
|
||||||
|
|
||||||
########################################################
|
########################################################
|
||||||
|
|
|
@ -59,6 +59,8 @@ The following topics will discuss how to develop and work with modules:
|
||||||
Best practices, recommendations, and things to avoid.
|
Best practices, recommendations, and things to avoid.
|
||||||
:doc:`developing_modules_checklist`
|
:doc:`developing_modules_checklist`
|
||||||
Checklist for contributing your module to Ansible.
|
Checklist for contributing your module to Ansible.
|
||||||
|
:doc:`testing`
|
||||||
|
Developing unit and integration tests.
|
||||||
:doc:`developing_python3`
|
:doc:`developing_python3`
|
||||||
Adding Python 3 support to modules (all new modules must be Python-2.6 and Python-3.5 compatible).
|
Adding Python 3 support to modules (all new modules must be Python-2.6 and Python-3.5 compatible).
|
||||||
:doc:`developing_modules_in_groups`
|
:doc:`developing_modules_in_groups`
|
||||||
|
|
|
@ -350,3 +350,5 @@ To test your documentation against your ``argument_spec`` you can use ``validate
|
||||||
|
|
||||||
If you're having a problem with the syntax of your YAML you can
|
If you're having a problem with the syntax of your YAML you can
|
||||||
validate it on the `YAML Lint <http://www.yamllint.com/>`_ website.
|
validate it on the `YAML Lint <http://www.yamllint.com/>`_ website.
|
||||||
|
|
||||||
|
For more information on testing, including how to add unit and integration tests, see :doc:`testing`.
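For example, running that check against a single module from a source checkout might look like this (the module path is purely illustrative)::

    source ./hacking/env-setup
    ./test/runner/ansible-test sanity --test validate-modules lib/ansible/modules/files/copy.py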
|
||||||
|
|
|
@ -25,8 +25,9 @@ Although it's tempting to get straight into coding, there are a few things to be
|
||||||
* Have a look at the existing modules and how they've been named in the :doc:`../list_of_all_modules`, especially in the same functional area (such as cloud, networking, databases).
|
* Have a look at the existing modules and how they've been named in the :doc:`../list_of_all_modules`, especially in the same functional area (such as cloud, networking, databases).
|
||||||
* Shared code can be placed into ``lib/ansible/module_utils/``
|
* Shared code can be placed into ``lib/ansible/module_utils/``
|
||||||
* Shared documentation (for example describing common arguments) can be placed in ``lib/ansible/utils/module_docs_fragments/``.
|
* Shared documentation (for example describing common arguments) can be placed in ``lib/ansible/utils/module_docs_fragments/``.
|
||||||
* With great power comes great responsiblity: Ansible module maintainers have a duty to help keep modules up to date. As with all successful community projects, module maintainers should keep a watchful eye for reported issues and contributions.
|
* With great power comes great responsibility: Ansible module maintainers have a duty to help keep modules up to date. As with all successful community projects, module maintainers should keep a watchful eye for reported issues and contributions.
|
||||||
* Although not required, unit and/or integration tests are strongly recommended. Unit tests are especially valuable when external resources (such as cloud or network devices) are required. For more information see ``test/`` and the `Testing Working Group <https://github.com/ansible/community/blob/master/MEETINGS.md>`_.
|
* Although not required, unit and/or integration tests are strongly recommended. Unit tests are especially valuable when external resources (such as cloud or network devices) are required. For more information see :doc:`testing` and the `Testing Working Group <https://github.com/ansible/community/blob/master/MEETINGS.md>`_.
|
||||||
|
* Starting with Ansible 2.4 all :doc:`../list_of_network_modules` MUST have unit tests.
|
||||||
|
|
||||||
Naming Convention
|
Naming Convention
|
||||||
`````````````````
|
`````````````````
|
||||||
|
@ -96,7 +97,7 @@ After publishing your PR to https://github.com/ansible/ansible, a Shippable CI t
|
||||||
If you need further advice, consider joining the ``#ansible-devel`` IRC channel (see the "Where to get support" section for how).
|
If you need further advice, consider joining the ``#ansible-devel`` IRC channel (see the "Where to get support" section for how).
|
||||||
|
|
||||||
|
|
||||||
We have a "ansibot" helper that comments on GitHub Issues and PRs which should highlight important information.
|
We have an ``ansibullbot`` helper that comments on GitHub Issues and PRs which should highlight important information.
|
||||||
|
|
||||||
|
|
||||||
Subsequent PRs
|
Subsequent PRs
|
||||||
|
@ -146,4 +147,3 @@ Please note that in the Ansible Git Repo the main branch is called ``devel`` rat
|
||||||
After your first PR has been merged ensure you "sync your fork" with ``ansible/ansible`` to ensure you've pulled in the directory structure and any shared code or documentation previously created.
|
After your first PR has been merged ensure you "sync your fork" with ``ansible/ansible`` to ensure you've pulled in the directory structure and any shared code or documentation previously created.
|
||||||
|
|
||||||
As stated in the GitHub documentation, always use feature branches for your PRs, never commit directly into `devel`.
|
As stated in the GitHub documentation, always use feature branches for your PRs, never commit directly into `devel`.
|
||||||
|
|
||||||
|
|
|
@ -1,205 +0,0 @@
|
||||||
Helping Testing PRs
|
|
||||||
```````````````````
|
|
||||||
|
|
||||||
If you're a developer, one of the most valuable things you can do is look at the github issues list and help fix bugs. We almost always prioritize bug fixing over
|
|
||||||
feature development, so clearing bugs out of the way is one of the best things you can do.
|
|
||||||
|
|
||||||
Even if you're not a developer, helping test pull requests for bug fixes and features is still immensely valuable.
|
|
||||||
|
|
||||||
This goes for testing new features as well as testing bugfixes.
|
|
||||||
|
|
||||||
In many cases, code should add tests that prove it works but that's not ALWAYS possible and tests are not always comprehensive, especially when a user doesn't have access
|
|
||||||
to a wide variety of platforms, or that is using an API or web service.
|
|
||||||
|
|
||||||
In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces.
|
|
||||||
In any case, things should always be tested manually the first time too.
|
|
||||||
|
|
||||||
Thankfully helping test ansible is pretty straightforward, assuming you are already used to how ansible works.
|
|
||||||
|
|
||||||
Get Started with A Source Checkout
|
|
||||||
++++++++++++++++++++++++++++++++++
|
|
||||||
|
|
||||||
You can do this by checking out ansible, making a test branch off the main one, merging a GitHub issue, testing,
|
|
||||||
and then commenting on that particular issue on GitHub. Here's how:
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
Testing source code from GitHub pull requests sent to us does have some inherent risk, as the source code
|
|
||||||
sent may have mistakes or malicious code that could have a negative impact on your system. We recommend
|
|
||||||
doing all testing on a virtual machine, whether a cloud instance, or locally. Some users like Vagrant
|
|
||||||
or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or
|
|
||||||
other flavors, since some features (apt vs. yum, for example) are specific to those OS versions.
|
|
||||||
|
|
||||||
First, you will need to configure your testing environment with the necessary tools required to run our test
|
|
||||||
suites. You will need at least::
|
|
||||||
|
|
||||||
git
|
|
||||||
python-nosetests (sometimes named python-nose)
|
|
||||||
python-passlib
|
|
||||||
python-mock
|
|
||||||
|
|
||||||
If you want to run the full integration test suite you'll also need the following packages installed::
|
|
||||||
|
|
||||||
svn
|
|
||||||
hg
|
|
||||||
python-pip
|
|
||||||
gem
|
|
||||||
|
|
||||||
Second, if you haven't already, clone the Ansible source code from GitHub::
|
|
||||||
|
|
||||||
git clone https://github.com/ansible/ansible.git --recursive
|
|
||||||
cd ansible/
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
If you have previously forked the repository on GitHub, you could also clone it from there.
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
If updating your repo for testing something module related, use "git rebase origin/devel" and then "git submodule update" to fetch
|
|
||||||
the latest development versions of modules. Skipping the "git submodule update" step will result in versions that will be stale.
|
|
||||||
|
|
||||||
Activating The Source Checkout
|
|
||||||
++++++++++++++++++++++++++++++
|
|
||||||
|
|
||||||
The Ansible source includes a script that allows you to use Ansible directly from source without requiring a
|
|
||||||
full installation, that is frequently used by developers on Ansible.
|
|
||||||
|
|
||||||
Simply source it (to use the Linux/Unix terminology) to begin using it immediately::
|
|
||||||
|
|
||||||
source ./hacking/env-setup
|
|
||||||
|
|
||||||
This script modifies the PYTHONPATH enviromnent variables (along with a few other things), which will be temporarily
|
|
||||||
set as long as your shell session is open.
|
|
||||||
|
|
||||||
If you'd like your testing environment to always use the latest source, you could call the command from startup scripts (for example,
|
|
||||||
`.bash_profile`).
|
|
||||||
|
|
||||||
Finding A Pull Request and Checking It Out On A Branch
|
|
||||||
++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
|
||||||
|
|
||||||
Next, find the pull request you'd like to test and make note of the line at the top which describes the source
|
|
||||||
and destination repositories. It will look something like this::
|
|
||||||
|
|
||||||
Someuser wants to merge 1 commit into ansible:devel from someuser:feature_branch_name
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
It is important that the PR request target be ansible:devel, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by ansible staff.
|
|
||||||
|
|
||||||
The username and branch at the end are the important parts, which will be turned into git commands as follows::
|
|
||||||
|
|
||||||
git checkout -b testing_PRXXXX devel
|
|
||||||
git pull https://github.com/someuser/ansible.git feature_branch_name
|
|
||||||
|
|
||||||
The first command creates and switches to a new branch named testing_PRXXXX, where the XXXX is the actual issue number associated with the pull request (for example, 1234). This branch is based on the devel branch. The second command pulls the new code from the users feature branch into the newly created branch.
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
Some users do not create feature branches, which can cause problems when they have multiple, un-related commits in their version of `devel`. If the source looks like `someuser:devel`, make sure there is only one commit listed on the pull request.
|
|
||||||
|
|
||||||
Finding a Pull Request for Ansible Modules
|
|
||||||
++++++++++++++++++++++++++++++++++++++++++
|
|
||||||
Ansible modules are in separate repositories, which are managed as Git submodules. Here's a step by step process for checking out a PR for an Ansible extras module, for instance:
|
|
||||||
|
|
||||||
1. git clone https://github.com/ansible/ansible.git
|
|
||||||
2. cd ansible
|
|
||||||
3. git submodule init
|
|
||||||
4. git submodule update --recursive [ fetches the submodules ]
|
|
||||||
5. cd lib/ansible/modules/extras
|
|
||||||
6. git fetch origin pull/1234/head:pr/1234 [ fetches the specific PR ]
|
|
||||||
7. git checkout pr/1234 [ do your testing here ]
|
|
||||||
8. cd /path/to/ansible/clone
|
|
||||||
9. git submodule update --recursive
|
|
||||||
|
|
||||||
For Those About To Test, We Salute You
|
|
||||||
++++++++++++++++++++++++++++++++++++++
|
|
||||||
|
|
||||||
At this point, you should be ready to begin testing!
|
|
||||||
|
|
||||||
If the PR is a bug-fix pull request, the first things to do are to run the suite of unit and integration tests, to ensure
|
|
||||||
the pull request does not break current functionality:
|
|
||||||
|
|
||||||
.. code-block:: shell-session
|
|
||||||
|
|
||||||
# Unit Tests
|
|
||||||
make tests
|
|
||||||
|
|
||||||
# Integration Tests
|
|
||||||
make -C test/integration
|
|
||||||
|
|
||||||
# Run specific integration test-target 'file' (as root)
|
|
||||||
sudo ./test/runner/ansible-test integration file -vv
|
|
||||||
|
|
||||||
# Run specific integration test-target 'file' (in docker)
|
|
||||||
./test/runner/ansible-test integration file --docker
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
Ansible does provide integration tests for cloud-based modules as well, however we do not recommend using them for some users
|
|
||||||
due to the associated costs from the cloud providers. As such, typically it's better to run specific parts of the integration battery
|
|
||||||
and skip these tests.
|
|
||||||
|
|
||||||
Integration tests aren't the end all beat all - in many cases what is fixed might not *HAVE* a test, so determining if it works means
|
|
||||||
checking the functionality of the system and making sure it does what it said it would do.
|
|
||||||
|
|
||||||
Pull requests for bug-fixes should reference the bug issue number they are fixing.
|
|
||||||
|
|
||||||
We encourage users to provide playbook examples for bugs that show how to reproduce the error, and these playbooks should be used to verify the bugfix does resolve
|
|
||||||
the issue if available. You may wish to also do your own review to poke the corners of the change.
|
|
||||||
|
|
||||||
Since some reproducers can be quite involved, you might wish to create a testing directory with the issue # as a sub-
|
|
||||||
directory to keep things organized:
|
|
||||||
|
|
||||||
.. code-block:: shell-session
|
|
||||||
|
|
||||||
mkdir -p testing/XXXX # where XXXX is again the issue # for the original issue or PR
|
|
||||||
cd testing/XXXX
|
|
||||||
# create files or git clone example playbook repo
|
|
||||||
|
|
||||||
While it should go without saying, be sure to read any playbooks before you run them. VMs help with running untrusted content greatly,
|
|
||||||
though a playbook could still do something to your computing resources that you'd rather not like.
|
|
||||||
|
|
||||||
Once the files are in place, you can run the provided playbook (if there is one) to test the functionality:
|
|
||||||
|
|
||||||
.. code-block:: shell-session
|
|
||||||
|
|
||||||
ansible-playbook -vvv playbook_name.yml
|
|
||||||
|
|
||||||
If there's no playbook, you may have to copy and paste playbook snippets or run an ad-hoc command that was pasted in.
|
|
||||||
|
|
||||||
Our issue template also included sections for "Expected Output" and "Actual Output", which should be used to gauge the output
|
|
||||||
from the provided examples.
|
|
||||||
|
|
||||||
If the pull request resolves the issue, please leave a comment on the pull request, showing the following information:
|
|
||||||
|
|
||||||
* "Works for me!"
|
|
||||||
* The output from `ansible --version`.
|
|
||||||
|
|
||||||
In some cases, you may wish to share playbook output from the test run as well.
|
|
||||||
|
|
||||||
Example!
|
|
||||||
|
|
||||||
| Works for me! Tested on `Ansible 1.7.1`. I verified this on CentOS 6.5 and also Ubuntu 14.04.
|
|
||||||
|
|
||||||
If the PR does not resolve the issue, or if you see any failures from the unit/integration tests, just include that output instead:
|
|
||||||
|
|
||||||
| This doesn't work for me.
|
|
||||||
|
|
|
||||||
| When I ran this Ubuntu 16.04 it failed with the following:
|
|
||||||
|
|
|
||||||
| \```
|
|
||||||
| BLARG
|
|
||||||
| StrackTrace
|
|
||||||
| RRRARRGGG
|
|
||||||
| \```
|
|
||||||
|
|
||||||
When you are done testing a feature branch, you can remove it with the following command:
|
|
||||||
|
|
||||||
.. code-block:: shell-session
|
|
||||||
|
|
||||||
$ git branch -D someuser-feature_branch_name
|
|
||||||
|
|
||||||
We understand some users may be inexperienced with git, or other aspects of
|
|
||||||
the above procedure, so feel free to stop by ansible-devel list for questions
|
|
||||||
and we'd be happy to help answer them.
|
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -1,8 +1,9 @@
|
||||||
|
*********************
|
||||||
Developer Information
|
Developer Information
|
||||||
=====================
|
*********************
|
||||||
|
|
||||||
Ansible Developer Guide
|
Ansible Developer Guide
|
||||||
```````````````````````
|
=======================
|
||||||
|
|
||||||
Welcome to the Ansible Developer Guide!
|
Welcome to the Ansible Developer Guide!
|
||||||
|
|
||||||
|
@ -16,13 +17,13 @@ To get started, select one of the following topics.
|
||||||
|
|
||||||
overview_architecture
|
overview_architecture
|
||||||
developing_modules
|
developing_modules
|
||||||
|
testing
|
||||||
developing_python3
|
developing_python3
|
||||||
developing_plugins
|
developing_plugins
|
||||||
developing_inventory
|
developing_inventory
|
||||||
developing_api
|
developing_api
|
||||||
developing_module_utilities
|
developing_module_utilities
|
||||||
developing_core
|
developing_core
|
||||||
developing_test_pr
|
|
||||||
developing_rebasing
|
developing_rebasing
|
||||||
repomerge
|
repomerge
|
||||||
developing_releases
|
developing_releases
|
||||||
|
|
199
docs/docsite/rst/dev_guide/testing.rst
Normal file
199
docs/docsite/rst/dev_guide/testing.rst
Normal file
|
@ -0,0 +1,199 @@
|
||||||
|
***************
|
||||||
|
Testing Ansible
|
||||||
|
***************
|
||||||
|
|
||||||
|
.. contents:: Topics
|
||||||
|
|
||||||
|
Introduction
|
||||||
|
============
|
||||||
|
|
||||||
|
This document describes:
|
||||||
|
|
||||||
|
* how Ansible is tested
|
||||||
|
* how to test Ansible locally
|
||||||
|
* how to extend the testing capabilities
|
||||||
|
|
||||||
|
Types of tests
|
||||||
|
==============
|
||||||
|
|
||||||
|
At a high level we have the following classifications of tests:
|
||||||
|
|
||||||
|
:compile:
|
||||||
|
* :doc:`testing_compile`
|
||||||
|
* Test python code against a variety of Python versions.
|
||||||
|
:sanity:
|
||||||
|
* :doc:`testing_sanity`
|
||||||
|
* Sanity tests are made up of scripts and tools used to perform static code analysis.
|
||||||
|
* The primary purpose of these tests is to enforce Ansible coding standards and requirements.
|
||||||
|
:integration:
|
||||||
|
* :doc:`testing_integration`
|
||||||
|
* Functional tests of modules and Ansible core functionality.
|
||||||
|
:units:
|
||||||
|
* :doc:`testing_units`
|
||||||
|
* Tests directly against individual parts of the code base.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
Link to Manual testing of PRs (testing_pullrequests.rst)
|
||||||
|
If you're a developer, one of the most valuable things you can do is look at the GitHub issues list and help fix bugs. We almost always prioritize bug fixing over feature development, so helping to fix bugs is one of the best things you can do.
|
||||||
|
|
||||||
|
Even if you're not a developer, helping to test pull requests for bug fixes and features is still immensely valuable.
|
||||||
|
|
||||||
|
|
||||||
|
Testing within GitHub & Shippable
|
||||||
|
=================================
|
||||||
|
|
||||||
|
|
||||||
|
Organization
|
||||||
|
------------
|
||||||
|
|
||||||
|
When Pull Requests (PRs) are created they are tested using Shippable, a Continuous Integration (CI) tool. Results are shown at the end of every PR.
|
||||||
|
|
||||||
|
|
||||||
|
When Shippable detects an error and it can be linked back to a file that has been modified in the PR then the relevant lines will be added as a GitHub comment. For example::
|
||||||
|
|
||||||
|
The test `ansible-test sanity --test pep8` failed with the following errors:
|
||||||
|
|
||||||
|
lib/ansible/modules/network/foo/bar.py:509:17: E265 block comment should start with '# '
|
||||||
|
|
||||||
|
The test `ansible-test sanity --test validate-modules` failed with the following errors:
|
||||||
|
lib/ansible/modules/network/foo/bar.py:0:0: E307 version_added should be 2.4. Currently 2.3
|
||||||
|
lib/ansible/modules/network/foo/bar.py:0:0: E316 ANSIBLE_METADATA.metadata_version: required key not provided @ data['metadata_version']. Got None
|
||||||
|
|
||||||
|
From the above example we can see that ``--test pep8`` and ``--test validate-modules`` have identified issues. The commands given allow you to run the same tests locally to ensure you've fixed the issues without having to push your changes to GitHub and wait for Shippable, for example:
|
||||||
|
|
||||||
|
TBD
|
||||||
|
|
||||||
|
If you haven't already got Ansible available, use the local checkout by running::
|
||||||
|
|
||||||
|
source hacking/env-setup
|
||||||
|
|
||||||
|
Then run the tests detailed in the GitHub comment::
|
||||||
|
|
||||||
|
ansible-test sanity --test pep8
|
||||||
|
ansible-test sanity --test validate-modules
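Both tests also accept a path, so you can restrict the run to the file called out in the comment; a minimal sketch reusing the illustrative ``bar.py`` path from the example above::

    ansible-test sanity --test pep8 lib/ansible/modules/network/foo/bar.py
    ansible-test sanity --test validate-modules lib/ansible/modules/network/foo/bar.py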
|
||||||
|
|
||||||
|
|
||||||
|
If there isn't a GitHub comment stating what's failed you can inspect the results by clicking on the "Details" button under the "checks have failed" message at the end of the PR.
|
||||||
|
|
||||||
|
Rerunning a failing CI job
|
||||||
|
--------------------------
|
||||||
|
|
||||||
|
Occasionally you may find your PR fails due to a reason unrelated to your change. This could happen for several reasons, including:
|
||||||
|
|
||||||
|
* a temporary issue accessing an external resource, such as a yum or git repo
|
||||||
|
* a timeout creating a virtual machine to run the tests on
|
||||||
|
|
||||||
|
If either of these issues appears to be the case, you can rerun the Shippable test by:
|
||||||
|
|
||||||
|
* closing and re-opening the PR
|
||||||
|
* making another change to the PR and pushing to GitHub
|
||||||
|
|
||||||
|
If the issue persists, please contact us in ``#ansible-devel`` on Freenode IRC.
|
||||||
|
|
||||||
|
|
||||||
|
How to test a PR
|
||||||
|
================
|
||||||
|
|
||||||
|
If you're a developer, one of the most valuable things you can do is look at the GitHub issues list and help fix bugs. We almost always prioritize bug fixing over feature development, so helping to fix bugs is one of the best things you can do.
|
||||||
|
|
||||||
|
Even if you're not a developer, helping to test pull requests for bug fixes and features is still immensely valuable.
|
||||||
|
|
||||||
|
Ideally, code should add tests that prove that the code works. That's not always possible and tests are not always comprehensive, especially when a user doesn't have access to a wide variety of platforms, or is using an API or web service. In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces. In any case, things should always be tested manually the first time as well.
|
||||||
|
|
||||||
|
Thankfully, helping to test Ansible is pretty straightforward, assuming you are familiar with how Ansible works.
|
||||||
|
|
||||||
|
Setup: Checking out a Pull Request
|
||||||
|
----------------------------------
|
||||||
|
|
||||||
|
You can do this by:
|
||||||
|
|
||||||
|
* checking out Ansible
|
||||||
|
* making a test branch off the main branch
|
||||||
|
* merging a GitHub issue
|
||||||
|
* testing
|
||||||
|
* commenting on that particular issue on GitHub
|
||||||
|
|
||||||
|
Here's how:
|
||||||
|
|
||||||
|
.. warning::
|
||||||
|
Testing source code from GitHub pull requests sent to us does have some inherent risk, as the source code
|
||||||
|
sent may have mistakes or malicious code that could have a negative impact on your system. We recommend
|
||||||
|
doing all testing on a virtual machine, whether a cloud instance, or locally. Some users like Vagrant
|
||||||
|
or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or
|
||||||
|
other flavors, since some features (apt vs. yum, for example) are specific to those OS versions.
|
||||||
|
|
||||||
|
|
||||||
|
Create a fresh area to work::
|
||||||
|
|
||||||
|
|
||||||
|
git clone https://github.com/ansible/ansible.git ansible-pr-testing
|
||||||
|
cd ansible-pr-testing
|
||||||
|
|
||||||
|
Next, find the pull request you'd like to test and make note of the line at the top which describes the source
|
||||||
|
and destination repositories. It will look something like this::
|
||||||
|
|
||||||
|
Someuser wants to merge 1 commit into ansible:devel from someuser:feature_branch_name
|
||||||
|
|
||||||
|
.. note:: Only test ``ansible:devel``
|
||||||
|
It is important that the PR request target be ansible:devel, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by Ansible staff.
|
||||||
|
|
||||||
|
The username and branch at the end are the important parts, which will be turned into git commands as follows::
|
||||||
|
|
||||||
|
git checkout -b testing_PRXXXX devel
|
||||||
|
git pull https://github.com/someuser/ansible.git feature_branch_name
|
||||||
|
|
||||||
|
The first command creates and switches to a new branch named ``testing_PRXXXX``, where the XXXX is the actual issue number associated with the pull request (for example, 1234). This branch is based on the ``devel`` branch. The second command pulls the new code from the user's feature branch into the newly created branch.
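Putting the two commands together for a hypothetical pull request numbered 1234 coming from ``someuser:feature_branch_name`` (both values are placeholders from the example above)::

    git checkout -b testing_PR1234 devel
    git pull https://github.com/someuser/ansible.git feature_branch_name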
|
||||||
|
|
||||||
|
.. note::
|
||||||
|
If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.
|
||||||
|
|
||||||
|
.. note::
|
||||||
|
Some users do not create feature branches, which can cause problems when they have multiple, unrelated commits in their version of ``devel``. If the source looks like ``someuser:devel``, make sure there is only one commit listed on the pull request.
|
||||||
|
|
||||||
|
The Ansible source includes a script that allows you to use Ansible directly from source without requiring a
|
||||||
|
full installation; the script is frequently used by developers working on Ansible.
|
||||||
|
|
||||||
|
Simply source it (to use the Linux/Unix terminology) to begin using it immediately::
|
||||||
|
|
||||||
|
source ./hacking/env-setup
|
||||||
|
|
||||||
|
This script modifies the ``PYTHONPATH`` environment variable (along with a few other things), which will be temporarily
|
||||||
|
set as long as your shell session is open.
|
||||||
|
|
||||||
|
Testing the Pull Request
|
||||||
|
------------------------
|
||||||
|
|
||||||
|
At this point, you should be ready to begin testing!
|
||||||
|
|
||||||
|
Some ideas of what to test are:
|
||||||
|
|
||||||
|
* Create a test playbook containing the module's examples and check that they function correctly
|
||||||
|
* Test to see if any Python backtraces are returned (that's a bug)
|
||||||
|
* Test on different operating systems, or against different library versions
|
||||||
|
|
||||||
|
|
||||||
|
Any potential issues should be added as comments on the pull request (and it's acceptable to comment if the feature works as well), remembering to include the output of ``ansible --version``.
|
||||||
|
|
||||||
|
Example::
|
||||||
|
|
||||||
|
Works for me! Tested on `Ansible 2.3.0`. I verified this on CentOS 6.5 and also Ubuntu 14.04.
|
||||||
|
|
||||||
|
If the PR does not resolve the issue, or if you see any failures from the unit/integration tests, just include that output instead:
|
||||||
|
|
||||||
|
| This doesn't work for me.
|
||||||
|
|
|
||||||
|
| When I ran this on Ubuntu 16.04 it failed with the following:
|
||||||
|
|
|
||||||
|
| \```
|
||||||
|
| BLARG
|
||||||
|
| StrackTrace
|
||||||
|
| RRRARRGGG
|
||||||
|
| \```
|
||||||
|
|
||||||
|
Want to know more about testing?
|
||||||
|
================================
|
||||||
|
|
||||||
|
If you'd like to know more about the plans for improving the testing of Ansible, why not join the `Testing Working Group <https://github.com/ansible/community/blob/master/MEETINGS.md>`_?
|
||||||
|
|
69
docs/docsite/rst/dev_guide/testing_compile.rst
Normal file
69
docs/docsite/rst/dev_guide/testing_compile.rst
Normal file
|
@ -0,0 +1,69 @@
|
||||||
|
*************
|
||||||
|
Compile Tests
|
||||||
|
*************
|
||||||
|
|
||||||
|
.. contents:: Topics
|
||||||
|
|
||||||
|
Overview
|
||||||
|
========
|
||||||
|
|
||||||
|
Compile tests check source files for valid syntax on all supported python versions:
|
||||||
|
|
||||||
|
- 2.4 (Ansible 2.3 only)
|
||||||
|
- 2.6
|
||||||
|
- 2.7
|
||||||
|
- 3.5
|
||||||
|
- 3.6
|
||||||
|
|
||||||
|
Running compile tests locally
|
||||||
|
=============================
|
||||||
|
|
||||||
|
Compile tests can be run across the whole code base by doing:
|
||||||
|
|
||||||
|
.. code:: shell
|
||||||
|
|
||||||
|
cd /path/to/ansible/source
|
||||||
|
source hacking/env-setup
|
||||||
|
ansible-test compile
|
||||||
|
|
||||||
|
Against a single file by doing:
|
||||||
|
|
||||||
|
.. code:: shell
|
||||||
|
|
||||||
|
ansible-test compile lineinfile
|
||||||
|
|
||||||
|
Or against a specific Python version by doing:
|
||||||
|
|
||||||
|
.. code:: shell
|
||||||
|
|
||||||
|
ansible-test compile --python 2.7 lineinfile
|
||||||
|
|
||||||
|
For advanced usage see the online help:
|
||||||
|
|
||||||
|
.. code:: shell
|
||||||
|
|
||||||
|
ansible-test compile --help
|
||||||
|
|
||||||
|
For advanced options see ``ansible-test compile --help``
|
||||||
|
|
||||||
|
|
||||||
|
Installing dependencies
|
||||||
|
=======================
|
||||||
|
|
||||||
|
``ansible-test`` has a number of dependencies. For ``compile`` tests we suggest running the tests with ``--local``, which is the default.
|
||||||
|
|
||||||
|
The dependencies can be installed using the ``--requirements`` argument. For example:
|
||||||
|
|
||||||
|
.. code:: shell
|
||||||
|
|
||||||
|
ansible-test compile --requirements lineinfile
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
The full list of requirements can be found at `test/runner/requirements <https://github.com/ansible/ansible/tree/devel/test/runner/requirements>`_. Requirements files are named after their respective commands. See also the `constraints <https://github.com/ansible/ansible/blob/devel/test/runner/requirements/constraints.txt>`_ applicable to all commands.
|
||||||
|
|
||||||
|
|
||||||
|
Extending compile tests
|
||||||
|
=======================
|
||||||
|
|
||||||
|
If you believe changes are needed to the Compile tests please add a comment on the `Testing Working Group Agenda <https://github.com/ansible/community/blob/master/MEETINGS.md>`_ so it can be discussed.
|
72
docs/docsite/rst/dev_guide/testing_httptester.rst
Normal file
72
docs/docsite/rst/dev_guide/testing_httptester.rst
Normal file
|
@ -0,0 +1,72 @@
|
||||||
|
**********
|
||||||
|
httptester
|
||||||
|
**********
|
||||||
|
|
||||||
|
.. contents:: Topics
|
||||||
|
|
||||||
|
Overview
|
||||||
|
========
|
||||||
|
|
||||||
|
``httptester`` is a docker container used to host certain resources required by :doc:`testing_integration`. This is to avoid CI tests requiring external resources (such as git or package repos) which, if temporarily unavailable, would cause tests to fail.
|
||||||
|
|
||||||
|
It provides an HTTP testing endpoint with the following capabilities:
|
||||||
|
|
||||||
|
* httpbin
|
||||||
|
* nginx
|
||||||
|
* SSL
|
||||||
|
* SNI
|
||||||
|
|
||||||
|
|
||||||
|
Source files can be found at `test/utils/docker/httptester/ <https://github.com/ansible/ansible/tree/devel/test/utils/docker/httptester>`_
|
||||||
|
|
||||||
|
Building
|
||||||
|
========
|
||||||
|
|
||||||
|
Docker
|
||||||
|
------
|
||||||
|
|
||||||
|
Both ways of building the ``docker`` image utilize the ``nginx:alpine`` image, but can
|
||||||
|
be customized for ``Fedora``, ``Red Hat``, ``CentOS``, ``Ubuntu``,
|
||||||
|
``Debian`` and other variants of ``Alpine``.
|
||||||
|
|
||||||
|
When utilizing ``packer`` or configuring with ``ansible-playbook``,
|
||||||
|
the services will not automatically start on launch, and will have to be
|
||||||
|
manually started using::
|
||||||
|
|
||||||
|
cd test/utils/docker/httptester
|
||||||
|
./services.sh
|
||||||
|
|
||||||
|
For example, when starting a docker container::
|
||||||
|
|
||||||
|
cd test/utils/docker/httptester
|
||||||
|
docker run -ti --rm -p 80:80 -p 443:443 --name httptester ansible/ansible:httptester /services.sh
|
||||||
|
|
||||||
|
docker build
|
||||||
|
------------
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
cd test/utils/docker/httptester
|
||||||
|
docker build -t ansible/ansible:httptester .
|
||||||
|
|
||||||
|
packer
|
||||||
|
------
|
||||||
|
|
||||||
|
The ``packer`` build will use ``ansible-playbook`` to perform the
|
||||||
|
configuration, and will tag the image as ``ansible/ansible:httptester``::
|
||||||
|
|
||||||
|
cd test/utils/docker/httptester
|
||||||
|
packer build packer.json
|
||||||
|
|
||||||
|
Ansible
|
||||||
|
=======
|
||||||
|
|
||||||
|
::
|
||||||
|
cd test/utils/docker/httptester
|
||||||
|
ansible-playbook -i hosts -v httptester.yml
|
||||||
|
|
||||||
|
|
||||||
|
Extending httptester
|
||||||
|
====================
|
||||||
|
|
||||||
|
If you have some time to improve ``httptester`` please add a comment on the `Testing Working Group Agenda <https://github.com/ansible/community/blob/master/MEETINGS.md>`_ to avoid duplicated effort.
|
235
docs/docsite/rst/dev_guide/testing_integration.rst
Normal file
235
docs/docsite/rst/dev_guide/testing_integration.rst
Normal file
|
@ -0,0 +1,235 @@
|
||||||
|
*****************
|
||||||
|
Integration tests
|
||||||
|
*****************
|
||||||
|
|
||||||
|
.. contents:: Topics
|
||||||
|
|
||||||
|
The Ansible integration test system.
|
||||||
|
|
||||||
|
Tests for playbooks, by playbooks.
|
||||||
|
|
||||||
|
Some tests may require credentials. Credentials may be specified with `credentials.yml`.
|
||||||
|
|
||||||
|
Some tests may require root.
|
||||||
|
|
||||||
|
Quick Start
|
||||||
|
===========
|
||||||
|
|
||||||
|
It is highly recommended that you install and activate the ``argcomplete`` python package.
|
||||||
|
It provides tab completion in ``bash`` for the ``ansible-test`` test runner.
|
||||||
|
|
||||||
|
To get started quickly using Docker containers for testing,
|
||||||
|
see `Tests in Docker containers`_ below.
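For the impatient, the typical Docker invocation (explained in detail in that section) is::

    test/runner/ansible-test integration -v posix/ci/ --docker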
|
||||||
|
|
||||||
|
Configuration
|
||||||
|
=============
|
||||||
|
|
||||||
|
Making your own version of ``integration_config.yml`` can allow for setting some
|
||||||
|
tunable parameters to help run the tests better in your environment. Some
|
||||||
|
tests (e.g. cloud) will only run when access credentials are provided. For
|
||||||
|
more information about supported credentials, refer to ``credentials.template``.
|
||||||
|
|
||||||
|
Prerequisites
|
||||||
|
=============
|
||||||
|
|
||||||
|
The tests will assume things like hg, svn, and git are installed and in the path.
|
||||||
|
|
||||||
|
(Complete list pending)
|
||||||
|
|
||||||
|
Non-destructive Tests
|
||||||
|
=====================
|
||||||
|
|
||||||
|
These tests will modify files in subdirectories, but will not do things that install or remove packages or things
|
||||||
|
outside of those test subdirectories. They will also not reconfigure or bounce system services.
|
||||||
|
|
||||||
|
.. note:: Running integration tests within Docker
|
||||||
|
|
||||||
|
To protect your system from any potential changes caused by integration tests, and to ensure a sensible set of dependencies is available, we recommend that you always run integration tests with the ``--docker`` option. See the `list of supported docker images <https://github.com/ansible/ansible/blob/devel/test/runner/completion/docker.txt>`_ for options.
|
||||||
|
|
||||||
|
.. note:: Avoiding pulling new Docker images:
|
||||||
|
|
||||||
|
Use the ``--docker-no-pull`` option to avoid pulling the latest container image. This is required when using custom local images that are not available for download.
|
||||||
|
|
||||||
|
Run as follows for all POSIX platform tests executed by our CI system::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration --docker fedora25 -v posix/ci/
|
||||||
|
|
||||||
|
You can select specific tests as well, such as for individual modules::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration -v ping
|
||||||
|
|
||||||
|
By installing ``argcomplete`` you can obtain a full list by doing::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration <tab><tab>
|
||||||
|
|
||||||
|
Destructive Tests
|
||||||
|
=================
|
||||||
|
|
||||||
|
These tests are allowed to install and remove some trivial packages. You will likely want to devote these
|
||||||
|
to a virtual environment, such as Docker. They won't reformat your filesystem::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration --docker fedora25 -v destructive/
|
||||||
|
|
||||||
|
Windows Tests
|
||||||
|
=============
|
||||||
|
|
||||||
|
These tests exercise the ``winrm`` connection plugin and Windows modules. You'll
|
||||||
|
need to define an inventory with a remote Windows 2008 or 2012 Server to use
|
||||||
|
for testing, and enable PowerShell Remoting to continue.
|
||||||
|
|
||||||
|
Running these tests may result in changes to your Windows host, so don't run
|
||||||
|
them against a production/critical Windows environment.
|
||||||
|
|
||||||
|
Enable PowerShell Remoting (run on the Windows host via Remote Desktop)::
|
||||||
|
Enable-PSRemoting -Force
|
||||||
|
|
||||||
|
Define Windows inventory::
|
||||||
|
|
||||||
|
cp inventory.winrm.template inventory.winrm
|
||||||
|
${EDITOR:-vi} inventory.winrm
|
||||||
|
|
||||||
|
Run the Windows tests executed by our CI system::
|
||||||
|
|
||||||
|
test/runner/ansible-test windows-integration -v windows/ci/
|
||||||
|
|
||||||
|
Tests in Docker containers
|
||||||
|
==========================
|
||||||
|
|
||||||
|
If you have a Linux system with Docker installed, running integration tests using the same Docker containers used by
|
||||||
|
the Ansible continuous integration (CI) system is recommended.
|
||||||
|
|
||||||
|
.. note:: Docker on non-Linux
|
||||||
|
|
||||||
|
Using Docker Engine to run Docker on a non-Linux host is not recommended.
|
||||||
|
Some tests may fail, depending on the image used for testing.
|
||||||
|
Using the ``--docker-privileged`` option may resolve the issue.
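For example, a sketch of adding that option to a ``ping`` test run (the container image name is illustrative)::

    test/runner/ansible-test integration -v ping --docker ubuntu1404 --docker-privileged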
|
||||||
|
|
||||||
|
Running Integration Tests
|
||||||
|
-------------------------
|
||||||
|
|
||||||
|
To run all CI integration test targets for POSIX platforms in a Ubuntu 16.04 container::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration -v posix/ci/ --docker
|
||||||
|
|
||||||
|
You can also run specific tests or select a different Linux distribution.
|
||||||
|
For example, to run tests for the ``ping`` module on a Ubuntu 14.04 container::
|
||||||
|
|
||||||
|
test/runner/ansible-test integration -v ping --docker ubuntu1404
|
||||||
|
|
||||||
|
Container Images
|
||||||
|
----------------
|
||||||
|
|
||||||
|
Python 2
|
||||||
|
````````
|
||||||
|
|
||||||
|
Most container images are for testing with Python 2:
|
||||||
|
|
||||||
|
- centos6
|
||||||
|
- centos7
|
||||||
|
- fedora24
|
||||||
|
- fedora25
|
||||||
|
- opensuse42.1
|
||||||
|
- opensuse42.2
|
||||||
|
- ubuntu1204
|
||||||
|
- ubuntu1404
|
||||||
|
- ubuntu1604
|
||||||
|
|
||||||
|
Python 3
|
||||||
|
````````
|
||||||
|
|
||||||
|
To test with Python 3 use the following images:
|
||||||
|
|
||||||
|
- ubuntu1604py3
|
||||||
|
|
||||||
|
Network Tests
|
||||||
|
=============
|
||||||
|
|
||||||
|
This page details the specifics around testing Ansible Networking modules.
|
||||||
|
|
||||||
|
|
||||||
|
.. important:: Network testing requirements for Ansible 2.4
|
||||||
|
|
||||||
|
Starting with Ansible 2.4, all network modules MUST include corresponding unit tests to defend functionality.
|
||||||
|
The unit tests must be added in the same PR that includes the new network module, or extends functionality.
|
||||||
|
Integration tests, although not required, are a welcome addition.
|
||||||
|
How to do this is explained in the rest of this document.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
Network integration tests can be run by doing::
|
||||||
|
|
||||||
|
cd test/integration
|
||||||
|
ANSIBLE_ROLES_PATH=targets ansible-playbook network-all.yaml
|
||||||
|
|
||||||
|
|
||||||
|
.. note::
|
||||||
|
|
||||||
|
* To run the network tests you will need a number of test machines and a suitably configured inventory file. A sample is included in ``test/integration/inventory.network``.
|
||||||
|
* As with the rest of the integration tests, they can be found grouped by module in ``test/integration/targets/MODULENAME/``
|
||||||
|
|
||||||
|
To filter a set of test cases, set ``limit_to`` to the name of the group; generally this is the name of the module::
|
||||||
|
|
||||||
|
ANSIBLE_ROLES_PATH=targets ansible-playbook -i inventory.network network-all.yaml -e "limit_to=eos_command"
|
||||||
|
|
||||||
|
|
||||||
|
To filter a single test case, set the ``tags`` option to ``eapi`` or ``cli``, set ``limit_to`` to the test group,
|
||||||
|
and ``test_case`` to the name of the test::
|
||||||
|
|
||||||
|
ANSIBLE_ROLES_PATH=targets ansible-playbook -i inventory.network network-all.yaml --tags="cli" -e "limit_to=eos_command test_case=notequal"
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
Writing network integration tests
|
||||||
|
---------------------------------
|
||||||
|
|
||||||
|
Test cases are added to roles based on the module being tested. Test cases
|
||||||
|
should include both `cli` and `eapi` test cases. Cli test cases should be
|
||||||
|
added to `test/integration/targets/modulename/tests/cli` and eapi tests should be added to
|
||||||
|
`test/integration/targets/modulename/tests/eapi`.
|
||||||
|
|
||||||
|
In addition to positive testing, negative tests are required to ensure user friendly warnings & errors are generated, rather than backtraces, for example:
|
||||||
|
|
||||||
|
.. code-block:: yaml
|
||||||
|
|
||||||
|
- name: test invalid subset (foobar)
|
||||||
|
eos_facts:
|
||||||
|
provider: "{{ cli }}"
|
||||||
|
gather_subset:
|
||||||
|
- "foobar"
|
||||||
|
register: result
|
||||||
|
ignore_errors: true
|
||||||
|
|
||||||
|
- assert:
|
||||||
|
that:
|
||||||
|
# Failures shouldn't return changes
|
||||||
|
- "result.changed == false"
|
||||||
|
# It's a failure
|
||||||
|
- "result.failed == true"
|
||||||
|
# Sensible Failure message
|
||||||
|
- "'Subset must be one of' in result.msg"
|
||||||
|
|
||||||
|
|
||||||
|
Conventions
|
||||||
|
```````````
|
||||||
|
|
||||||
|
- Each test case should generally follow the pattern:
|
||||||
|
|
||||||
|
setup -> test -> assert -> test again (idempotent) -> assert -> teardown (if needed) -> done (see the sketch after this list)
|
||||||
|
|
||||||
|
This keeps test playbooks from becoming monolithic and difficult to
|
||||||
|
troubleshoot.
|
||||||
|
|
||||||
|
- Include a name for each task that is not an assertion. (It's OK to add names
|
||||||
|
to assertions too. But to make it easy to identify the broken task within a failed
|
||||||
|
test, at least provide a helpful name for each task.)
|
||||||
|
|
||||||
|
- Files containing test cases must end in `.yaml`
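As a sketch of the setup -> test -> assert -> test again pattern described above (the module, values and assertions are illustrative only):

.. code-block:: yaml

    - name: configure hostname
      eos_config:
        lines: hostname foo
        provider: "{{ cli }}"
      register: result

    - assert:
        that:
          - "result.changed == true"

    - name: configure hostname again (idempotency check)
      eos_config:
        lines: hostname foo
        provider: "{{ cli }}"
      register: result

    - assert:
        that:
          - "result.changed == false"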
|
||||||
|
|
||||||
|
|
||||||
|
Adding a new Network Platform
|
||||||
|
`````````````````````````````
|
||||||
|
|
||||||
|
A top-level playbook is required, such as `ansible/test/integration/eos.yaml`, which needs to be referenced by `ansible/test/integration/network-all.yaml`.
|
||||||
|
|
||||||
|
Where to find out more
|
||||||
|
======================
|
77
docs/docsite/rst/dev_guide/testing_integration_legacy.rst
Normal file
77
docs/docsite/rst/dev_guide/testing_integration_legacy.rst
Normal file
|
@ -0,0 +1,77 @@
|
||||||
|
*******************************************
|
||||||
|
Testing using the Legacy Integration system
|
||||||
|
*******************************************
|
||||||
|
|
||||||
|
.. contents:: Topics
|
||||||
|
|
||||||
|
This page details how to run the integration tests that haven't been ported to the new ``ansible-test`` framework.
|
||||||
|
|
||||||
|
The following areas are still tested using the legacy ``make tests`` command:
|
||||||
|
|
||||||
|
* amazon
|
||||||
|
* azure
|
||||||
|
* cloudflare
|
||||||
|
* cloudscale
|
||||||
|
* cloudstack
|
||||||
|
* consul
|
||||||
|
* exoscale
|
||||||
|
* gce
|
||||||
|
* jenkins
|
||||||
|
* rackspace
|
||||||
|
|
||||||
|
Over time the above list will be reduced as tests are ported to the ``ansible-test`` framework.
|
||||||
|
|
||||||
|
|
||||||
|
Running Cloud Tests
|
||||||
|
====================
|
||||||
|
|
||||||
|
Cloud tests exercise capabilities of cloud modules (e.g. ec2_key). These are
|
||||||
|
not 'tests run in the cloud' so much as tests that leverage the cloud modules
|
||||||
|
and are organized by cloud provider.
|
||||||
|
|
||||||
|
Some AWS tests may use environment variables. It is recommended to either unset any AWS environment variables (such as ``AWS_DEFAULT_PROFILE``, ``AWS_SECRET_ACCESS_KEY``, etc.) or be sure that the environment variables match the credentials provided in ``credentials.yml``, to ensure the tests run consistently and to their full capability on the expected account. See `AWS CLI docs <http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html>`_ for information on creating a profile.
|
||||||
|
|
||||||
|
Subsets of tests may be run by ``#commenting`` out unnecessary roles in the appropriate playbook, such as ``test/integration/amazon.yml``.
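An illustrative excerpt (the role names and host group are placeholders; check the actual playbook for the real ones)::

    # test/integration/amazon.yml
    - hosts: amazon
      roles:
        - { role: test_ec2_key, tags: test_ec2_key }
        # - { role: test_ec2_group, tags: test_ec2_group }    # commented out, so this role is skipped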
|
||||||
|
|
||||||
|
In order to run cloud tests, you must provide access credentials in a file
|
||||||
|
named ``credentials.yml``. A sample credentials file named
|
||||||
|
``credentials.template`` is available for syntax help.
|
||||||
|
|
||||||
|
|
||||||
|
Provide cloud credentials::
|
||||||
|
|
||||||
|
cp credentials.template credentials.yml
|
||||||
|
${EDITOR:-vi} credentials.yml
|
||||||
|
|
||||||
|
|
||||||
|
Other configuration
|
||||||
|
===================
|
||||||
|
In order to run some tests, you must provide access credentials in a file
|
||||||
|
named ``credentials.yml``. A sample credentials file named
|
||||||
|
``credentials.template`` is available for syntax help.
|
||||||
|
|
||||||
|
Running Tests
|
||||||
|
=============
|
||||||
|
|
||||||
|
The tests are invoked via a ``Makefile``::
|
||||||
|
|
||||||
|
# If you haven't already got Ansible available, use the local checkout by doing:
|
||||||
|
|
||||||
|
source hacking/env-setup
|
||||||
|
|
||||||
|
cd test/integration/
|
||||||
|
# TARGET is the name of the test from the list at the top of this page
|
||||||
|
#make TARGET
|
||||||
|
# e.g.
|
||||||
|
make amazon
|
||||||
|
# To run all cloud tests you can do:
|
||||||
|
make cloud
|
||||||
|
|
||||||
|
.. warning:: Possible cost of running cloud tests
|
||||||
|
|
||||||
|
Running cloud integration tests will create and destroy cloud
|
||||||
|
resources. Running these tests may result in additional fees associated with
|
||||||
|
your cloud account. Care is taken to ensure that created resources are
|
||||||
|
removed. However, it is advisable to inspect your AWS console to ensure no
|
||||||
|
unexpected resources are running.
|
||||||
|
|
55
docs/docsite/rst/dev_guide/testing_pep8.rst
Normal file
55
docs/docsite/rst/dev_guide/testing_pep8.rst
Normal file
|
@ -0,0 +1,55 @@
|
||||||
|
*****
|
||||||
|
PEP 8
|
||||||
|
*****
|
||||||
|
|
||||||
|
.. contents:: Topics
|
||||||
|
|
||||||
|
`PEP 8`_ style guidelines are enforced by ``pep8`` on all python files in the repository by default.
|
||||||
|
|
||||||
|
Current Rule Set
|
||||||
|
================
|
||||||
|
|
||||||
|
By default all files are tested using the current rule set.
|
||||||
|
All ``pep8`` tests are executed, except those listed in the `current ignore list`_.
|
||||||
|
|
||||||
|
.. warning:: Updating the Rule Set
|
||||||
|
|
||||||
|
Changes to the Rule Set need approval from the Core Team, and must be done via the `Testing Working Group <https://github.com/ansible/community/blob/master/MEETINGS.md>`_.
|
||||||
|
|
||||||
|
Legacy Rule Set
|
||||||
|
===============
|
||||||
|
|
||||||
|
Files which are listed in the `legacy file list`_ are tested using the legacy rule set.
|
||||||
|
|
||||||
|
All ``pep8`` tests are executed, except those listed in the `current ignore list`_ or `legacy ignore list`_.
|
||||||
|
|
||||||
|
Files listed in the legacy file list which pass the current rule set will result in an error.
|
||||||
|
|
||||||
|
This is intended to prevent regressions on style guidelines for files which pass the more stringent current rule set.
|
||||||
|
|
||||||
|
Skipping Tests
|
||||||
|
==============
|
||||||
|
|
||||||
|
Files listed in the `skip list`_ are not tested by ``pep8``.
|
||||||
|
|
||||||
|
Removed Files
|
||||||
|
=============
|
||||||
|
|
||||||
|
Files which have been removed from the repository must be removed from the legacy file list and the skip list.
|
||||||
|
|
||||||
|
Running Locally
|
||||||
|
===============
|
||||||
|
|
||||||
|
The pep8 check can be run locally with::
|
||||||
|
|
||||||
|
|
||||||
|
./test/runner/ansible-test sanity --test pep8 [file-or-directory-path-to-check] ...
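For example, to check a single module (the path is illustrative)::

    ./test/runner/ansible-test sanity --test pep8 lib/ansible/modules/files/copy.py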
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
.. _PEP 8: https://www.python.org/dev/peps/pep-0008/
|
||||||
|
.. _pep8: https://pypi.python.org/pypi/pep8
|
||||||
|
.. _current ignore list: https://github.com/ansible/ansible/blob/devel/test/sanity/pep8/current-ignore.txt
|
||||||
|
.. _legacy file list: https://github.com/ansible/ansible/blob/devel/test/sanity/pep8/legacy-files.txt
|
||||||
|
.. _legacy ignore list: https://github.com/ansible/ansible/blob/devel/test/sanity/pep8/legacy-ignore.txt
|
||||||
|
.. _skip list: https://github.com/ansible/ansible/blob/devel/test/sanity/pep8/skip.txt
|
|
@ -1,7 +1,13 @@
|
||||||
|
***************
|
||||||
Testing Ansible
|
Testing Ansible
|
||||||
===============
|
***************
|
||||||
|
|
||||||
How to run and create tests for the Ansible core engine and modules with ``ansible-test``.
|
.. contents:: Topics
|
||||||
|
|
||||||
|
This page describes how to:
|
||||||
|
|
||||||
|
* Run tests locally using ``ansible-test``
|
||||||
|
* Extend
|
||||||
|
|
||||||
Requirements
|
Requirements
|
||||||
============
|
============
|
||||||
|
@ -13,15 +19,18 @@ The requirements for each ``ansible-test`` command are covered later.
|
||||||
Setup
|
Setup
|
||||||
=====
|
=====
|
||||||
|
|
||||||
#. Fork the `ansible/ansible <https://github.com/ansible/ansible/>`_ repository on Git Hub.
|
The code and tests are in the same GitHub repository, to get a local copy do
|
||||||
#. Clone your fork: ``git clone git@github.com:USERNAME/ansible.git``
|
|
||||||
#. Install the optional ``argcomplete`` package for tab completion (highly recommended):
|
|
||||||
|
|
||||||
#. ``pip install argcomplete``
|
#. Fork the `ansible/ansible <https://github.com/ansible/ansible/>`_ repository on GitHub.
|
||||||
#. ``activate-global-python-argcomplete``
|
#. Clone your fork: ``git clone git@github.com:USERNAME/ansible.git``
|
||||||
#. Restart your shell to complete global activation.
|
#. Install the optional ``argcomplete`` package for tab completion (highly recommended)::
|
||||||
|
|
||||||
|
pip install argcomplete
|
||||||
|
activate-global-python-argcomplete
|
||||||
|
# Restart your shell to complete global activation.
|
||||||
|
|
||||||
#. Configure your environment to run from your clone (once per shell): ``. hacking/env-setup``
|
#. Configure your environment to run from your clone (once per shell): ``. hacking/env-setup``
|
||||||
|
#. ``ansible``, ``ansible-playbook`` and ``ansible-test`` will now be in your ``PATH``
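A quick way to sanity-check the setup (exact output will differ between checkouts)::

    . hacking/env-setup
    ansible --version      # should report the devel version from your clone
    ansible-test --help    # should resolve to the copy inside your clone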
|
||||||
|
|
||||||
Test Environments
|
Test Environments
|
||||||
=================
|
=================
|
||||||
|
@ -81,7 +90,7 @@ An API key is required to use this feature.
|
||||||
|
|
||||||
Recommended for integration tests.
|
Recommended for integration tests.
|
||||||
|
|
||||||
See the `list of supported platforms and versions <runner/completion/remote.txt>`_ for additional details.
|
See the `list of supported platforms and versions <https://github.com/ansible/ansible/blob/devel/test/runner/completion/remote.txt>`_ for additional details.
|
||||||
|
|
||||||
General Usage
|
General Usage
|
||||||
=============
|
=============
|
|
@ -1,5 +1,8 @@
|
||||||
|
************
|
||||||
Sanity Tests
|
Sanity Tests
|
||||||
============
|
************
|
||||||
|
|
||||||
|
.. contents:: Topics
|
||||||
|
|
||||||
Sanity tests are made up of scripts and tools used to perform static code analysis.
|
Sanity tests are made up of scripts and tools used to perform static code analysis.
|
||||||
The primary purpose of these tests is to enforce Ansible coding standards and requirements.
|
The primary purpose of these tests is to enforce Ansible coding standards and requirements.
|
||||||
|
@ -12,18 +15,18 @@ Available Tests
|
||||||
|
|
||||||
Tests can be listed with ``ansible-test sanity --list-tests``.
|
Tests can be listed with ``ansible-test sanity --list-tests``.
|
||||||
|
|
||||||
This list is a combination of two different categories of tests.
|
This list is a combination of two different categories of tests, "Code Smell" and "Built-in".
|
||||||
|
|
||||||
Code Smell Tests
|
Code Smell Tests
|
||||||
----------------
|
----------------
|
||||||
|
|
||||||
Miscellaneous `scripts <code-smell/>`_ used for enforcing coding standards and requirements, identifying trip hazards, etc.
|
Miscellaneous `scripts <https://github.com/ansible/ansible/tree/devel/test/sanity/code-smell/>`_ used for enforcing coding standards and requirements, identifying trip hazards, etc.
|
||||||
|
|
||||||
These tests are listed and accessed by script name. There is no actual test named ``code-smell``.
|
These tests are listed and accessed by script name. There is no actual test named ``code-smell``.
|
||||||
|
|
||||||
All executable scripts added to the ``code-smell`` directory are automatically detected and executed by ``ansible-test``.
|
All executable scripts added to the ``code-smell`` directory are automatically detected and executed by ``ansible-test``.
|
||||||
|
|
||||||
Scripts in the directory which fail can be skipped by adding them to `skip.txt <code-smell/skip.txt>`_.
|
Scripts in the directory which fail can be skipped by adding them to `skip.txt <https://github.com/ansible/ansible/blob/devel/test/sanity/code-smell/skip.txt>`_.
|
||||||
This is useful for scripts which identify issues that have not yet been resolved in the code base.
|
This is useful for scripts which identify issues that have not yet been resolved in the code base.
|
||||||
|
|
||||||
Files tested are specific to the individual test scripts and are not affected by command line arguments.
|
Files tested are specific to the individual test scripts and are not affected by command line arguments.
|
||||||
|

@ -34,6 +37,8 @@ Built-in Tests

These tests are integrated directly into ``ansible-test``.
All files relevant to each test are tested unless specific files are specified.

A full list of tests can be obtained by doing ``ansible-test sanity --list-tests``.
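
For example, to list the available tests and then run a single one against a specific file (the module path is only illustrative):

.. code:: shell

    ansible-test sanity --list-tests
    ansible-test sanity --test pep8 lib/ansible/modules/files/copy.py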

ansible-doc
~~~~~~~~~~~

@ -42,7 +47,7 @@ Verifies that ``ansible-doc`` can parse module documentation on all supported py

pep8
~~~~

Python static analysis for PEP 8 style guideline compliance.
Python static analysis for PEP 8 style guideline compliance. See :doc:`testing_pep8` for more information.

pylint
~~~~~~

@ -62,7 +67,7 @@ Static code analysis for shell scripts using the excellent `shellcheck <https://

validate-modules
~~~~~~~~~~~~~~~~

Analyze modules for common issues in code and documentation.
Analyze modules for common issues in code and documentation. See :doc:`testing_validate-modules` for more information.

yamllint
~~~~~~~~

93 docs/docsite/rst/dev_guide/testing_units.rst Normal file

@ -0,0 +1,93 @@

**********
Unit Tests
**********

Unit tests are small isolated tests that target a specific library or module.

.. contents:: Topics

Available Tests
===============

Unit tests can be found in `test/units <https://github.com/ansible/ansible/tree/devel/test/units>`_; notice that the directory structure matches that of ``lib/ansible/``.

Running Tests
=============

Unit tests can be run across the whole code base by doing:

.. code:: shell

    cd /path/to/ansible/source
    source hacking/env-setup
    ansible-test units --tox

Against a single file by doing:

.. code:: shell

    ansible-test units --tox apt

Or against a specific Python version by doing:

.. code:: shell

    ansible-test units --tox --python 2.7 apt

For advanced usage see the online help::

    ansible-test units --help

Installing dependencies
=======================

``ansible-test`` has a number of dependencies; for ``units`` tests we suggest using ``tox``.

The dependencies can be installed using the ``--requirements`` argument. For example:

.. code:: shell

    ansible-test units --tox --python 2.7 --requirements apt

.. note:: tox version requirement

    When using ``ansible-test`` with ``--tox``, tox >= 2.5.0 is required.

The full list of requirements can be found at `test/runner/requirements <https://github.com/ansible/ansible/tree/devel/test/runner/requirements>`_. Requirements files are named after their respective commands. See also the `constraints <https://github.com/ansible/ansible/blob/devel/test/runner/requirements/constraints.txt>`_ applicable to all commands.
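
If you prefer not to use ``--requirements``, the same requirements files can be installed manually with ``pip``; for example (the file name is assumed from the naming convention above):

.. code:: shell

    pip install -r test/runner/requirements/units.txt -c test/runner/requirements/constraints.txt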

Extending unit tests
====================

.. warning:: What a unit test isn't

    If you start writing a test that requires external services then you may be writing an integration test, rather than a unit test.

Fixtures files
``````````````

To mock out fetching results from devices, you can use ``fixtures`` to read in pre-generated data.

Text files live in ``test/units/modules/network/PLATFORM/fixtures/``.

Data is loaded using the ``load_fixture`` method.

See `eos_banner test <https://github.com/ansible/ansible/blob/devel/test/units/modules/network/eos/test_eos_banner.py>`_ for a practical example.

Code Coverage
`````````````

Most ``ansible-test`` commands allow you to collect code coverage; this is particularly useful for indicating where to extend testing.

To collect coverage data add the ``--coverage`` argument to your ``ansible-test`` command line:

.. code:: shell

    ansible-test units --coverage
    ansible-test coverage html
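
For example, coverage can be collected while running the unit tests for a single module and then rendered as an HTML report (the module name is illustrative):

.. code:: shell

    ansible-test units --tox --python 2.7 --coverage apt
    ansible-test coverage html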

@ -1,21 +1,27 @@

****************
validate-modules
================
****************

.. contents:: Topics

Python program to help test or validate Ansible modules.

``validate-modules`` is one of the ``ansible-test`` Sanity Tests, see :doc:`testing_sanity` for more information.

Originally developed by Matt Martz (@sivel)

Usage
~~~~~
=====

.. code:: shell

    cd /path/to/ansible/source
    source hacking/env-setup
    test/sanity/validate-modules/validate-modules /path/to/modules
    ansible-test sanity --test validate-modules
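
The test can also be restricted to particular files by passing them as additional arguments (the module path is only an example):

.. code:: shell

    ansible-test sanity --test validate-modules lib/ansible/modules/files/copy.py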

Help
~~~~
====

.. code:: shell

@ -38,11 +44,18 @@ Help

    Output format. Default: "plain"
    --output OUTPUT   Output location, use "-" for stdout. Default "-"

Extending validate-modules
==========================

The ``validate-modules`` tool has a `schema.py <https://github.com/ansible/ansible/blob/devel/test/sanity/validate-modules/schema.py>`_ that is used to validate the YAML blocks, such as ``DOCUMENTATION`` and ``RETURNS``.

Codes
~~~~~~~
=====

Errors
^^^^^^
------

+---------+--------------------------------------------------------------------------------------------------------------------------------------------+
| code | sample message |

@ -139,7 +152,7 @@ Errors

+---------+--------------------------------------------------------------------------------------------------------------------------------------------+

Warnings
^^^^^^^^
--------

+---------+--------------------------------------------------------------------------------------------------------------------------------------------+
| code | sample message |

@ -7,11 +7,11 @@ Ansible has many modules, but not all of them are maintained by the core project

Documentation updates for each module can also be edited directly in the module and by submitting a pull request to the module source code; just look for the "DOCUMENTATION" block in the source tree.

If you believe you have found a bug in a module and are already running the latest stable or development version of Ansible, first look in the `issue tracker at github.com/ansible/ansible <http://github.com/ansible/ansible/issues>`_ to see if a bug has already been filed. If not, we would be grateful if you would file one.
If you believe you have found a bug in a module and are already running the latest stable or development version of Ansible, first look in the `issue tracker at github.com/ansible/ansible <https://github.com/ansible/ansible/issues>`_ to see if a bug has already been filed. If not, we would be grateful if you would file one.

Should you have a question rather than a bug report, inquiries are welcome on the `ansible-project google group <https://groups.google.com/forum/#!forum/ansible-project>`_ or on Ansible's "#ansible" channel, located on irc.freenode.net.

For development-oriented topics, use the `ansible-devel google group <https://groups.google.com/forum/#!forum/ansible-devel>`_ or Ansible's "#ansible" and "#ansible-devel" channels, located on irc.freenode.net. You should also read :doc:`community`, :doc:`dev_guide/developing_test_pr` and :doc:`dev_guide/developing_modules`.
For development-oriented topics, use the `ansible-devel google group <https://groups.google.com/forum/#!forum/ansible-devel>`_ or Ansible's ``#ansible`` and ``#ansible-devel`` channels, located on irc.freenode.net. You should also read :doc:`community`, :doc:`dev_guide/testing` and :doc:`dev_guide/developing_modules`.

The modules are hosted on GitHub in a subdirectory of the `ansible <https://github.com/ansible/ansible/tree/devel/lib/ansible/modules>`_ repo.

2 docs/templates/plugin.rst.j2 vendored

@ -234,5 +234,5 @@ For more information on what this means please read :doc:`modules_support`

{% endif %}
{% endif %}

For help in developing on modules, should you be so inclined, please read :doc:`community`, :doc:`dev_guide/developing_test_pr` and :doc:`dev_guide/developing_modules`.
For help in developing on modules, should you be so inclined, please read :doc:`community`, :doc:`dev_guide/testing` and :doc:`dev_guide/developing_modules`.

@ -1,13 +0,0 @@

Compile Tests
=============

Compile tests check source files for valid syntax on all supported python versions:

- 2.6
- 2.7
- 3.5
- 3.6

Tests are run with ``ansible-test compile``.
All versions are tested unless the ``--python`` option is used.
All ``*.py`` files are tested unless specific files are specified.
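
For example (the version and file path are illustrative, not a required invocation):

    ansible-test compile
    ansible-test compile --python 3.5 lib/ansible/modules/files/copy.py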

@ -1,223 +0,0 @@

Integration tests
=================

The ansible integration system.

Tests for playbooks, by playbooks.

Some tests may require credentials. Credentials may be specified with `credentials.yml`.

Some tests may require root.

Quick Start
===========

It is highly recommended that you install and activate the `argcomplete` python package.
It provides tab completion in `bash` for the `ansible-test` test runner.
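
For example, one way to do this is with `pip` (exact activation steps vary by system; check the `argcomplete` documentation for alternatives):

    pip install argcomplete
    activate-global-python-argcomplete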

To get started quickly using Docker containers for testing,
see [Tests in Docker containers](#tests-in-docker-containers).

Configuration
=============

Making your own version of `integration_config.yml` can allow for setting some
tunable parameters to help run the tests better in your environment. Some
tests (e.g. cloud) will only run when access credentials are provided. For
more information about supported credentials, refer to `credentials.template`.

Prerequisites
=============

The tests will assume things like hg, svn, and git are installed and in path.

(Complete list pending)

Non-destructive Tests
=====================

These tests will modify files in subdirectories, but will not do things that install or remove packages or things
outside of those test subdirectories. They will also not reconfigure or bounce system services.

Run as follows for all POSIX platform tests executed by our CI system:

    test/runner/ansible-test integration -v posix/ci/

You can select specific tests as well, such as for individual modules:

    test/runner/ansible-test integration -v ping

Destructive Tests
=================

These tests are allowed to install and remove some trivial packages. You will likely want to devote these
to a virtual environment. They won't reformat your filesystem, however :)

    test/runner/ansible-test integration -v destructive/

Cloud Tests
===========

Cloud tests exercise capabilities of cloud modules (e.g. ec2_key). These are
not 'tests run in the cloud' so much as tests that leverage the cloud modules
and are organized by cloud provider.

In order to run cloud tests, you must provide access credentials in a file
named `credentials.yml`. A sample credentials file named
`credentials.template` is available for syntax help.

Provide cloud credentials:

    cp credentials.template credentials.yml
    ${EDITOR:-vi} credentials.yml

Run the tests:

    make cloud

*WARNING* running cloud integration tests will create and destroy cloud
resources. Running these tests may result in additional fees associated with
your cloud account. Care is taken to ensure that created resources are
removed. However, it is advisable to inspect your AWS console to ensure no
unexpected resources are running.

Windows Tests
=============

These tests exercise the winrm connection plugin and Windows modules. You'll
need to define an inventory with a remote Windows 2008 or 2012 Server to use
for testing, and enable PowerShell Remoting to continue.

Running these tests may result in changes to your Windows host, so don't run
them against a production/critical Windows environment.

Enable PowerShell Remoting (run on the Windows host via Remote Desktop):

    Enable-PSRemoting -Force

Define Windows inventory:

    cp inventory.winrm.template inventory.winrm
    ${EDITOR:-vi} inventory.winrm

Run the Windows tests executed by our CI system:

    test/runner/ansible-test windows-integration -v windows/ci/

Tests in Docker containers
==========================

If you have a Linux system with Docker installed, running integration tests using the same Docker containers used by
the Ansible continuous integration (CI) system is recommended.

> Using Docker Engine to run Docker on a non-Linux host is not recommended.
> Some tests may fail, depending on the image used for testing.
> Using the `--docker-privileged` option may resolve the issue.
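
For example, if a test fails only inside a container, retrying with the privileged option may help (the image and target are illustrative):

    test/runner/ansible-test integration -v ping --docker ubuntu1604 --docker-privileged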

## Running Integration Tests

To run all CI integration test targets for POSIX platforms in a Ubuntu 16.04 container:

    test/runner/ansible-test integration -v posix/ci/ --docker

You can also run specific tests or select a different Linux distribution.
For example, to run tests for the `ping` module on a Ubuntu 14.04 container:

    test/runner/ansible-test integration -v ping --docker ubuntu1404

## Container Images

### Python 2

Most container images are for testing with Python 2:

  - centos6
  - centos7
  - fedora24
  - fedora25
  - opensuse42.1
  - opensuse42.2
  - ubuntu1204
  - ubuntu1404
  - ubuntu1604

### Python 3

To test with Python 3 use the following images:

  - ubuntu1604py3

Network Tests
=============

**Note:** From Ansible 2.3, for any new Network Module to be accepted it must be accompanied by a corresponding test.

For further help with this please contact `gundalow` in `#ansible-devel` on FreeNode IRC.

```
$ ANSIBLE_ROLES_PATH=targets ansible-playbook network-all.yaml
```

*NOTE* To run the network tests you will need a number of test machines and a suitably configured inventory file. A sample is included in `test/integration/inventory.network`.

*NOTE* As with the rest of the integration tests, they can be found grouped by module in `test/integration/targets/MODULENAME/`.

To filter a set of test cases set `limit_to` to the name of the group; generally this is the name of the module:

```
$ ANSIBLE_ROLES_PATH=targets ansible-playbook -i inventory.network network-all.yaml -e "limit_to=eos_command"
```

To filter a singular test case set the tags option to `eapi` or `cli`, set `limit_to` to the test group,
and `test_case` to the name of the test:

```
$ ANSIBLE_ROLES_PATH=targets ansible-playbook -i inventory.network network-all.yaml --tags="cli" -e "limit_to=eos_command test_case=notequal"
```

## Contributing Test Cases

Test cases are added to roles based on the module being tested. Test cases
should include both `cli` and `eapi` test cases. Cli test cases should be
added to `test/integration/targets/modulename/tests/cli` and eapi tests should be added to
`test/integration/targets/modulename/tests/eapi`.

In addition to positive testing, negative tests are required to ensure user friendly warnings & errors are generated, rather than backtraces, for example:

```yaml
- name: test invalid subset (foobar)
  eos_facts:
    provider: "{{ cli }}"
    gather_subset:
      - "foobar"
  register: result
  ignore_errors: true

- assert:
    that:
      # Failures shouldn't return changes
      - "result.changed == false"
      # It's a failure
      - "result.failed == true"
      # Sensible Failure message
      - "'Subset must be one of' in result.msg"
```

### Conventions

- Each test case should generally follow the pattern:

  > setup -> test -> assert -> test again (idempotent) -> assert -> teardown (if needed) -> done

  This keeps test playbooks from becoming monolithic and difficult to
  troubleshoot.

- Include a name for each task that is not an assertion. (It's OK to add names
  to assertions too. But to make it easy to identify the broken task within a failed
  test, at least provide a helpful name for each task.)

- Files containing test cases must end in `.yaml`

### Adding a new Network Platform

A top level playbook is required, such as `ansible/test/integration/eos.yaml`, which needs to be referenced by `ansible/test/integration/network-all.yaml`.

@ -1,33 +0,0 @@

# PEP 8

[PEP 8](https://www.python.org/dev/peps/pep-0008/) style guidelines are enforced by
[pep8](https://pypi.python.org/pypi/pep8) on all python files in the repository by default.

## Current Rule Set

By default all files are tested using the current rule set.
All `pep8` tests are executed, except those listed in the [current ignore list](current-ignore.txt).

## Legacy Rule Set

Files which are listed in the [legacy file list](legacy-files.txt) are tested using the legacy rule set.
All `pep8` tests are executed, except those listed in the [current ignore list](current-ignore.txt) or
[legacy ignore list](legacy-ignore.txt).

> Files listed in the legacy file list which pass the current rule set will result in an error.
> This is intended to prevent regressions on style guidelines for files which pass the more stringent current rule set.

## Skipping Tests

Files listed in the [skip list](skip.txt) are not tested by `pep8`.

## Removed Files

Files which have been removed from the repository must be removed from the legacy file list and the skip list.

## Running Locally

The pep8 check can be run locally with:

    ./test/runner/ansible-test sanity --test pep8 [file-or-directory-path-to-check] ...
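
For example, to check a single module rather than the whole tree (the path is illustrative):

    ./test/runner/ansible-test sanity --test pep8 lib/ansible/modules/files/copy.py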

@ -1,50 +0,0 @@

httptester
==========

HTTP Testing endpoint which provides httpbin, nginx, SSL and SNI
capabilities, for providing a local HTTP endpoint for testing

Building
--------

Docker
~~~~~~

Both ways of building docker utilize the ``nginx:alpine`` image, but can
be customized for ``Fedora``, ``Red Hat``, ``CentOS``, ``Ubuntu``,
``Debian`` and other variants of ``Alpine``

When utilizing ``packer`` or configuring with ``ansible-playbook``
the services will not automatically start on launch, and will have to be
manually started using::

    $ /services.sh

Such as when starting a docker container::

    docker run -ti --rm -p 80:80 -p 443:443 --name httptester ansible/ansible:httptester /services.sh

docker build
^^^^^^^^^^^^

::

    docker build -t ansible/ansible:httptester .

packer
^^^^^^

The packer build will use ``ansible-playbook`` to perform the
configuration, and will tag the image as ``ansible/ansible:httptester``

::

    packer build packer.json

Ansible
~~~~~~~

::

    ansible-playbook -i hosts -v httptester.yml