
Minor testing docs improvements (#24103)

Fix a few formatting issues spotted post-review.
Also reapply a missing commit.
John R Barker 2017-04-28 11:58:38 +01:00 committed by GitHub
parent ecbf8e933a
commit 8733253a76
6 changed files with 42 additions and 120 deletions


@@ -33,9 +33,6 @@ At a high level we have the following classifications of tests:
* Tests directly against individual parts of the code base.
Link to Manual testing of PRs (testing_pullrequests.rst)
If you're a developer, one of the most valuable things you can do is look at the GitHub issues list and help fix bugs. We almost always prioritize bug fixing over feature development, so helping to fix bugs is one of the best things you can do.
Even if you're not a developer, helping to test pull requests for bug fixes and features is still immensely valuable.
@@ -62,8 +59,6 @@ When Shippable detects an error and it can be linked back to a file that has bee
lib/ansible/modules/network/foo/bar.py:0:0: E316 ANSIBLE_METADATA.metadata_version: required key not provided @ data['metadata_version']. Got None
From the above example we can see that ``--test pep8`` and ``--test validate-modules`` have identified issues. The commands given allow you to run the same tests locally to ensure you've fixed the issues without having to push your changes to GitHub and wait for Shippable, for example:
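As an illustrative sketch (substitute whichever tests Shippable actually reported), the two failing checks above could be re-run locally with::

    # Re-run the reported sanity tests against your local checkout.
    ansible-test sanity --test pep8
    ansible-test sanity --test validate-modules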
TBD
If you haven't already got Ansible available, use the local checkout by running::
@@ -137,7 +132,8 @@ and destination repositories. It will look something like this::
Someuser wants to merge 1 commit into ansible:devel from someuser:feature_branch_name
.. note:: Only test ``ansible:devel``
It is important that the PR request target be ``ansible:devel``, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by Ansible staff.
The username and branch at the end are the important parts, which will be turned into git commands as follows::
@@ -187,9 +183,9 @@ If the PR does not resolve the issue, or if you see any failures from the unit/i
| When I ran this on Ubuntu 16.04 it failed with the following:
|
| \```
| some output
| StackTrace
| some other output
| \```
Want to know more about testing?


@@ -38,21 +38,19 @@ Or against a specific Python version by doing:
ansible-test compile --python 2.7 lineinfile
For advanced usage see the help:
.. code:: shell
ansible-test units --help
For advanced options see ``ansible-test compile --help``
Installing dependencies
=======================
``ansible-test`` has a number of dependencies. For ``compile`` tests we suggest running the tests with ``--local``, which is the default.
The dependencies can be installed using the ``--requirements`` argument. For example:
.. code:: shell
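
    # The exact example was not captured in this excerpt; this is an
    # illustrative invocation that installs the pip requirements needed
    # by the "units" command before running it.
    ansible-test units --requirements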


@@ -18,9 +18,6 @@ Quick Start
It is highly recommended that you install and activate the ``argcomplete`` python package.
It provides tab completion in ``bash`` for the ``ansible-test`` test runner.
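One way to install and activate it (a sketch; adjust for your environment)::

    pip install argcomplete
    activate-global-python-argcomplete
    # Restart your shell to complete global activation.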
To get started quickly using Docker containers for testing,
see [Tests in Docker containers](#tests-in-docker-containers).
Configuration
=============
@@ -80,7 +77,8 @@ for testing, and enable PowerShell Remoting to continue.
Running these tests may result in changes to your Windows host, so don't run
them against a production/critical Windows environment.
Enable PowerShell Remoting (run on the Windows host via Remote Desktop)::
Enable-PSRemoting -Force
Define Windows inventory::
@@ -141,6 +139,12 @@ To test with Python 3 use the following images:
- ubuntu1604py3
Cloud Tests
===========
See the :doc:`testing_integration_legacy` page for more information.
Network Tests
=============
@@ -183,9 +187,9 @@ Writing network integration tests
---------------------------------
Test cases are added to roles based on the module being tested. Test cases
should include both cli and API test cases. Cli test cases should be
added to ``test/integration/targets/modulename/tests/cli`` and API tests should be added to
``test/integration/targets/modulename/tests/eapi``, or ``nxapi``.
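As a sketch of that layout, the test directories for a module could be created like this (the module name ``eos_banner`` is only an example)::

    # Hypothetical module name, used only to illustrate the layout.
    MODULE=eos_banner
    mkdir -p test/integration/targets/${MODULE}/tests/{cli,eapi}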
In addition to positive testing, negative tests are required to ensure user-friendly warnings and errors are generated, rather than backtraces, for example:
@@ -229,7 +233,9 @@ Conventions
Adding a new Network Platform
`````````````````````````````
A top-level playbook is required, such as ``ansible/test/integration/eos.yaml``, which needs to be referenced by ``ansible/test/integration/network-all.yaml``.
Where to find out more
======================
If you'd like to know more about the plans for improving the testing of Ansible, then why not join the `Testing Working Group <https://github.com/ansible/community/blob/master/MEETINGS.md>`_.


@@ -29,7 +29,7 @@ Cloud tests exercise capabilities of cloud modules (e.g. ec2_key). These are
not 'tests run in the cloud' so much as tests that leverage the cloud modules
and are organized by cloud provider.
Some AWS tests may use environment variables. It is recommended to either unset any AWS environment variables (such as ``AWS_DEFAULT_PROFILE``, ``AWS_SECRET_ACCESS_KEY``, and so on) or be sure that the environment variables match the credentials provided in ``credentials.yml``, to ensure the tests run consistently and to their full capability on the expected account. See `AWS CLI docs <http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html>`_ for information on creating a profile.
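For example, a minimal way to clear the most common AWS variables before running the tests (any variable names beyond those mentioned above are assumptions)::

    # Unset AWS credential variables so only credentials.yml is used.
    unset AWS_DEFAULT_PROFILE AWS_PROFILE AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY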
Subsets of tests may be run by ``#commenting`` out unnecessary roles in the appropriate playbook, such as ``test/integration/amazon.yml``.
@@ -46,6 +46,7 @@ Provide cloud credentials::
Other configuration
===================
In order to run some tests, you must provide access credentials in a file
named ``credentials.yml``. A sample credentials file named
``credentials.template`` is available for syntax help.
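A minimal sketch, assuming you are working from the directory that contains the template::

    # Copy the template, then edit the copy with your own credentials.
    cp credentials.template credentials.yml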
@@ -53,12 +54,14 @@ named ``credentials.yml``. A sample credentials file named
Running Tests
=============
The tests are invoked via a ``Makefile``.
If you haven't already got Ansible available, use the local checkout by doing::
source hacking/env-setup
Run the tests by doing::
cd test/integration/
# TARGET is the name of the test from the list at the top of this page
#make TARGET


@@ -4,7 +4,7 @@ Testing Ansible
.. contents:: Topics
This document describes how to:
* Run tests locally using ``ansible-test``
* Extend
@@ -16,71 +16,12 @@ There are no special requirements for running ``ansible-test`` on Python 2.7 or
The ``argparse`` package is required for Python 2.6.
The requirements for each ``ansible-test`` command are covered later.
Setup
=====
The code and tests are in the same GitHub repository. To get a local copy:
#. Fork the `ansible/ansible <https://github.com/ansible/ansible/>`_ repository on GitHub.
#. Clone your fork: ``git clone git@github.com:USERNAME/ansible.git``
#. Install the optional ``argcomplete`` package for tab completion (highly recommended)::
pip install argcomplete
activate-global-python-argcomplete
# Restart your shell to complete global activation.
#. Configure your environment to run from your clone (once per shell): ``. hacking/env-setup``
#. ``ansible``, ``ansible-playbook`` and ``ansible-test`` will now be in your ``PATH``
Test Environments
=================
Most ``ansible-test`` commands support running in one or more isolated test environments to simplify testing.
Local
-----
The ``--local`` option runs tests locally without the use of an isolated test environment.
This is the default behavior.
Recommended for ``compile`` tests.
See the `command requirements directory <runner/requirements/>`_ for the requirements for each ``ansible-test`` command.
Requirements files are named after their respective commands.
See also the `constraints <runner/requirements/constraints.txt>`_ applicable to all commands.
Use the ``--requirements`` option to automatically install ``pip`` requirements relevant to the command being used.
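For example (an illustrative invocation; any sanity test name could be substituted)::

    # Install the pip requirements for the sanity command, then run one test.
    ansible-test sanity --test pep8 --requirements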
Docker
------
The ``--docker`` option runs tests in a docker container.
Recommended for ``integration`` tests.
This option accepts an optional docker container image.
See the `list of supported docker images <runner/completion/docker.txt>`_ for options.
Use the ``--docker-no-pull`` option to avoid pulling the latest container image.
This is required when using custom local images that are not available for download.
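A couple of illustrative invocations (the image name here is an assumption; pick one from the supported images list linked above)::

    # Run the "ping" integration test in the default container image.
    ansible-test integration ping --docker
    # Use a specific image and skip pulling a newer copy of it.
    ansible-test integration ping --docker ubuntu1604 --docker-no-pull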
Tox
---
The ``--tox`` option runs tests in a ``tox``-managed Python virtual environment.
Recommended for ``windows-integration`` and ``units`` tests.
The following Python versions are supported:
* 2.6
* 2.7
* 3.5
* 3.6
By default, test commands will run against all supported Python versions when using ``tox``.
Use the ``--python`` option to specify a single Python version to use for test commands.
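For example (commands also shown in the Running Tests section below)::

    # Run all unit tests on every supported Python version under tox.
    ansible-test units --tox
    # Restrict the run to a single Python version.
    ansible-test units --tox --python 2.7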
Remote
------
@@ -92,40 +33,6 @@ An API key is required to use this feature.
See the `list of supported platforms and versions <https://github.com/ansible/ansible/blob/devel/test/runner/completion/remote.txt>`_ for additional details.
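An illustrative invocation (the platform/version string is an assumption; take real values from the supported platforms list above)::

    # Run the Windows CI integration tests on a remote instance
    # provisioned by ansible-test.
    ansible-test windows-integration windows/ci/ --remote windows/2012-R2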
General Usage
=============
Tests are run with the ``ansible-test`` command.
Consult ``ansible-test --help`` for usage information not covered here.
Use the ``--explain`` option to see what commands will be executed without actually running them.
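For example, a dry run of a single sanity test (an illustrative command)::

    # Show the commands ansible-test would execute, without running them.
    ansible-test sanity --test pep8 --explain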
Running Tests
=============
There are four main categories of tests, each in their own directory.
* `compile <compile/>`_ - Python syntax checking for supported versions. Examples:
* ``ansible-test compile`` - Check syntax for all supported versions.
* ``ansible-test compile --python 3.5`` - Check only Python 3.5 syntax.
* `sanity <sanity/>`_ - Static code analysis and general purpose script-based tests. Examples:
* ``ansible-test sanity --tox --python 2.7`` - Run all sanity tests on Python 2.7 using ``tox``.
* ``ansible-test sanity --test pep8`` - Run the ``pep8`` test without ``tox``.
* `integration <integration/>`_ - Playbook based tests for modules and core engine functionality. Examples:
* ``ansible-test integration ping --docker`` - Run the ``ping`` module test using ``docker``.
* ``ansible-test windows-integration windows/ci/`` - Run all Windows tests covered by CI.
* `units <units/>`_ - API oriented tests using mock interfaces for modules and core engine functionality. Examples:
* ``ansible-test units --tox`` - Run all unit tests on all supported Python versions using ``tox``.
* ``ansible-test units --tox --python 2.7 test/units/vars/`` - Run specific tests on Python 2.7 using ``tox``.
Consult each of the test directories for additional details on usage and requirements.
Interactive Shell
=================


@@ -82,12 +82,24 @@ See `eos_banner test <https://github.com/ansible/ansible/blob/devel/test/units/
Code Coverage
`````````````
Most ``ansible-test`` commands allow you to collect code coverage; this is particularly useful for indicating where to extend testing.
To collect coverage data, add the ``--coverage`` argument to your ``ansible-test`` command line:
.. code:: shell
ansible-test units --coverage apt
ansible-test coverage html
Results will be written to ``test/results/reports/coverage/index.html``
Reports can be generated in several different formats:
* ``ansible-test coverage report`` - Console report.
* ``ansible-test coverage html`` - HTML report.
* ``ansible-test coverage xml`` - XML report.
To clear data between test runs, use the ``ansible-test coverage erase`` command. For a full list of features see the online help::
ansible-test coverage --help