10. pytest -- skip and xfail marks

Series index: https://www.cnblogs.com/luizyao/p/11771740.html

In practice, whether a test case should run may depend on external conditions, such as being specific to one operating system (e.g. Windows), or we may already expect it to fail, for example because it is blocked by a known bug. If we mark such cases in advance, pytest can handle them accordingly and produce a more accurate test report.

Two marks are commonly used in these scenarios:

  • skip: the test case is executed only when certain conditions are met and is skipped otherwise; for example, a test case that only supports Windows is skipped on non-Windows platforms;
  • xfail: we know the test case will fail for a specific reason, e.g. it tests a feature that is not yet implemented or is blocked by a known bug;

pytest does not show the details of skipped and xfailed cases by default; this behavior can be controlled with the -r option.

Each result type is represented by a letter; the specific mapping is as follows:

(f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed(p/P), or (A)ll

For example, to show details for XFAIL, XPASS, and SKIPPED results:

pytest -rxXs

More details can be found in: 2. Use and call - Summary Report

1. Skip execution of test cases

1.1. @pytest.mark.skip decorator

The simplest way to skip a test case is the @pytest.mark.skip decorator, which accepts an optional reason parameter describing why the case is skipped:

@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...

1.2. pytest.skip method

If we want to force a skip while a test is running (or during its setup/teardown), we can call pytest.skip(), which also accepts a msg parameter describing the reason for skipping:

def test_function():
    if not valid_config():
        pytest.skip("unsupported configuration")

In addition, pytest.skip() has a boolean parameter allow_module_level (default False) indicating whether the call is allowed at module level; if it is True, the rest of the module is skipped.

For example, skip the whole module on non-Windows platforms, because it only contains Windows-only tests:

import sys
import pytest

if not sys.platform.startswith("win"):
    pytest.skip("skipping windows-only tests", allow_module_level=True)

Be careful:

When the allow_module_level parameter is set inside a test case, it has no effect;

def test_one():
    pytest.skip("jump out", allow? Module? Level = true)


def test_two():
    assert 1

That is, in the example above, test_one is skipped, but test_two is still executed;

1.3. @pytest.mark.skipif decorator

If we want to skip test cases conditionally, we can use the @pytest.mark.skipif decorator;

For example, skip the test case when the Python version is lower than 3.6:

import sys

import pytest


@pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
def test_function():
    ...

We can also share a pytest.mark.skipif mark between two modules;

For example, test_module.py defines minversion, which skips a case when the Python version is lower than the minimum supported version:

# src/chapter-10/test_module.py

import sys

import pytest

minversion = pytest.mark.skipif(sys.version_info < (3, 8),
                                reason='Please use python 3.8 or later.')


@minversion
def test_one():
    assert True

minversion is then imported in test_other_module.py:

# src/chapter-10/test_other_module.py

from test_module import minversion


@minversion
def test_two():
    assert True

Now, let's run these two test cases (the Python version in the current virtual environment is 3.7.3):

λ pipenv run pytest -rs -k 'module' src/chapter-10/
================================ test session starts ================================= 
platform win32 -- Python 3.7.3, pytest-5.1.3, py-1.8.0, pluggy-0.13.0
rootdir: D:\Personal Files\Projects\pytest-chinese-doc
collected 2 items

src\chapter-10\test_module.py s                                                 [ 50%] 
src\chapter-10\test_other_module.py s                                           [100%]

============================== short test summary info =============================== 
SKIPPED [1] src\chapter-10\test_module.py:29: Please use python 3.8 or later.
SKIPPED [1] src\chapter-10\test_other_module.py:26: Please use python 3.8 or later.
================================= 2 skipped in 0.03s =================================

As you can see, minversion works in both test modules;

Therefore, in a large test project, all skip conditions can be defined in a single file and imported into test modules as needed;

Note also that when a test case carries multiple skipif marks, it is skipped as soon as any one of the conditions is satisfied, as the sketch below illustrates;
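
For illustration, a minimal sketch of stacking two skipif marks on one test case (the condition values and names here are only examples):

import sys

import pytest


@pytest.mark.skipif(sys.platform.startswith('win'), reason='example: not run on Windows')
@pytest.mark.skipif(sys.version_info < (3, 6), reason='requires python3.6 or higher')
def test_stacked_conditions():
    # Skipped if either of the conditions above evaluates to True.
    assert True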

Note: there is no pytest.skipif() function;

1.4. pytest.importorskip method

We can also skip the rest of a module when importing a dependency fails;

docutils = pytest.importorskip("docutils")

We can also require a minimum version; the check is based on the __version__ attribute of the imported module:

docutils = pytest.importorskip("docutils", minversion="0.3") 

We can also pass a reason parameter describing why the module is skipped;
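
For reference, a minimal sketch of typical usage (docutils is just an illustrative dependency; the test body is made up):

import pytest

# Skip the rest of this module if docutils (>= 0.3) cannot be imported.
docutils = pytest.importorskip('docutils', minversion='0.3',
                               reason='docutils >= 0.3 is required for these tests')


def test_docutils_importable():
    # Reaching this point means the import succeeded and the version check passed.
    assert hasattr(docutils, '__version__')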

We have noticed that both pytest.importorskip and pytest.skip(allow_module_level=True) can skip the rest of a module during the import phase; in fact, they raise the same exception internally:

# pytest.skip(allow_module_level=True)
raise Skipped(msg=msg, allow_module_level=allow_module_level)

# pytest.importorskip()
raise Skipped(reason, allow_module_level=True) from None

The only difference is that importorskip additionally handles the minversion parameter:

# _pytest/outcomes.py
 
    if minversion is None:
        return mod
    verattr = getattr(mod, "__version__", None)
    if minversion is not None:
        if verattr is None or Version(verattr) < Version(minversion):
            raise Skipped(
                "module %r has __version__ %r, required is: %r"
                % (modname, verattr, minversion),
                allow_module_level=True,
            )

This also confirms that what it actually checks is the module's __version__ attribute;

Therefore, in common scenarios, the same effect can be achieved like this:

import pytest

try:
    import docutils
except ImportError:
    pytest.skip("could not import 'docutils': No module named 'docutils'",
                allow_module_level=True)

1.5. Skip test class

Apply @pytest.mark.skip or @pytest.mark.skipif to a test class:

# src/chapter-10/test_skip_class.py

import pytest


@pytest.mark.skip("Acts on every use case in a class, so pytest Two collected SKIPPED Use cases.")
class TestMyClass():
    def test_one(self):
        assert True

    def test_two(self):
        assert True

1.6. Skip test module

Define the pytestmark variable in the module (recommended):

# src/chapter-10/test_skip_module.py

import pytest

pytestmark = pytest.mark.skip('Applies to every test case in the module, so pytest collects two SKIPPED cases.')


def test_one():
    assert True


def test_two():
    assert True

Alternatively, call the pytest.skip method in the module and set allow_module_level=True:

# src/chapter-10/test_skip_module.py

import pytest

pytest.skip('The module is skipped during the collection phase, so no test cases are collected at all.', allow_module_level=True)


def test_one():
    assert True


def test_two():
    assert True

1.7. Skip specified file or directory

By setting the collect_ignore_glob option in conftest.py, you can exclude the specified files and directories during the collection phase;

For example, ignore files in the current test directory whose names match test*.py, as well as everything under config/sub:

collect_ignore_glob = ['test*.py', 'config/sub']
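
As a sketch of a fuller conftest.py (the ignored paths are purely illustrative), collect_ignore can also be populated conditionally, for example depending on the platform:

# conftest.py

import sys

# Glob patterns: ignore any test*.py here and everything under config/sub
collect_ignore_glob = ['test*.py', 'config/sub']

# Exact paths can be ignored too, optionally based on a condition
collect_ignore = []
if not sys.platform.startswith('win'):
    collect_ignore.append('windows_only_helpers.py')  # hypothetical file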

More details can be found in: https://docs.pytest.org/en/5.1.3/example/pythoncollection.html#customizing-test-collection

1.8. Summary

Scope             | pytest.mark.skip                | pytest.mark.skipif                | pytest.skip                          | pytest.importorskip   | conftest.py
Test case         | @pytest.mark.skip()             | @pytest.mark.skipif()             | pytest.skip(msg='')                  | /                     | /
Test class        | @pytest.mark.skip()             | @pytest.mark.skipif()             | /                                    | /                     | /
Test module       | pytestmark = pytest.mark.skip() | pytestmark = pytest.mark.skipif() | pytest.skip(allow_module_level=True) | pytest.importorskip() | /
File or directory | /                               | /                                 | /                                    | /                     | collect_ignore_glob

2. Mark test cases as expected failures

We can mark a test case with @pytest.mark.xfail to indicate that we expect it to fail;

The test case is executed normally, but no traceback is shown if it fails. The final result is reported as XFAIL (an expected failure) when the case fails, and as XPASS (an unexpected pass) when it succeeds.
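
For reference, a minimal sketch of the decorator form (the failing assertion is deliberately artificial):

import pytest


@pytest.mark.xfail(reason='demonstrates an expected failure')
def test_expected_failure():
    # Fails on purpose; thanks to the mark it is reported as XFAIL instead of FAILED.
    assert 1 == 2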

In addition, we can call pytest.xfail() during test execution to mark the result as XFAIL directly and skip the rest of the case:

def test_function():
    if not valid_config():
        pytest.xfail("failing configuration (but should work)")

You can also specify a reason parameter for pytest.xfail to indicate the reason;

Let's take a closer look at the parameters of @pytest.mark.xfail:

  • condition positional parameter, default value is None

    Like @pytest.mark.skipif, it accepts a python expression; the case is marked only when the condition holds;

    For example, mark the case only on Python 3.6 or higher:

    @pytest.mark.xfail(sys.version_info >= (3, 6), reason="python3.6 api changes")
    def test_function():
        ...
  • reason keyword parameter, default value is None

    You can specify a string indicating the reason for marking the use case;

  • strict keyword parameter, default value is False

    When strict=False, a case that fails is reported as XFAIL (an expected failure), and a case that passes is reported as XPASS (an unexpected pass);

    When strict=True, a case that passes is reported as FAILED instead of XPASS;

    We can also configure it in pytest.ini file:

    [pytest]
    xfail_strict=true
  • raises keyword parameter, default value is None

    It can be a single exception class or a tuple of exception classes, indicating the exception(s) we expect the case to raise;

    If the case fails for a reason other than the expected exception, pytest reports the result as FAILED;

  • run keyword parameter, default value is True

    When run=False, pytest does not execute the test case at all and directly marks the result as XFAIL;

The following table summarizes the impact of different parameter combinations on the test results (where xfail = pytest.mark.xfail):

Result                         | @xfail() | @xfail(strict=True) | @xfail(raises=IndexError) | @xfail(strict=True, raises=IndexError) | @xfail(..., run=False)
Case passes                    | XPASS    | FAILED              | XPASS                     | FAILED                                 | XFAIL
Case fails with AssertionError | XFAIL    | XFAIL               | FAILED                    | FAILED                                 | XFAIL
Case raises IndexError         | XFAIL    | XFAIL               | XFAIL                     | XFAIL                                  | XFAIL
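
A small sketch illustrating two rows of the table (the test bodies are artificial):

import pytest


@pytest.mark.xfail(strict=True, reason='an unexpected pass should fail the run')
def test_strict_pass_is_reported_as_failed():
    # The case passes, so with strict=True pytest reports FAILED instead of XPASS.
    assert True


@pytest.mark.xfail(raises=IndexError, reason='empty-list access is a known gap')
def test_expected_index_error():
    # Raises IndexError, which matches `raises`, so the result is XFAIL.
    assert [][0] == 1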

2.1. Disable xfail

We can pass the command-line option --runxfail to ignore the xfail marks, so that the marked cases are executed and reported as ordinary test cases, as if they had never been marked.

Similarly, pytest.xfail() calls will no longer take effect;
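
For example, to run this chapter's cases while ignoring their xfail marks (the path follows the layout used above):

pipenv run pytest --runxfail src/chapter-10/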

3. Combining with the pytest.param method

The pytest.param method can be used to specify a particular set of arguments for @pytest.mark.parametrize or for a parametrized fixture; its keyword parameter marks accepts one mark or a list of marks, which are applied to that round of the test;

Let's illustrate with the following example:

# src/chapter-10/test_params.py

import pytest
import sys


@pytest.mark.parametrize(
    ('n', 'expected'),
    [(2, 1),
     pytest.param(2, 1, marks=pytest.mark.xfail(), id='XPASS'),
     pytest.param(0, 1, marks=pytest.mark.xfail(raises=ZeroDivisionError), id='XFAIL'),
     pytest.param(1, 2, marks=pytest.mark.skip(reason='Invalid parameter, skipping execution')),
     pytest.param(1, 2, marks=pytest.mark.skipif(sys.version_info <= (3, 8), reason='Please use python 3.8 or later.'))])
def test_params(n, expected):
    assert 2 / n == expected

Run it:

λ pipenv run pytest -rA src/chapter-10/test_params.py
================================ test session starts ================================= 
platform win32 -- Python 3.7.3, pytest-5.1.3, py-1.8.0, pluggy-0.13.0
rootdir: D:\Personal Files\Projects\pytest-chinese-doc
collected 5 items

src\chapter-10\test_params.py .Xxss                                             [100%]

======================================= PASSES ======================================= 
============================== short test summary info =============================== 
PASSED src/chapter-10/test_params.py::test_params[2-1]
SKIPPED [1] src\chapter-10\test_params.py:26: Invalid parameter, skipping execution
SKIPPED [1] src\chapter-10\test_params.py:26: Please use python 3.8 or later.
XFAIL src/chapter-10/test_params.py::test_params[XFAIL]
XPASS src/chapter-10/test_params.py::test_params[XPASS]
================= 1 passed, 2 skipped, 1 xfailed, 1 xpassed in 0.08s =================

For details on parametrized fixtures, see: 4. fixtures: explicit, modular and extensible -- marking test cases in a parametrized fixture
