2. pytest Chinese Documentation - Usage and Invocation


Usage and invocation

Call pytest through python -m pytest

You can run tests through the Python interpreter:

python -m pytest [...]

This is almost identical to running the pytest [...] command directly; the difference is that invoking it via python also adds the current directory to sys.path.

Exit codes returned when pytest finishes

When the pytest command finishes, it may return one of the following six exit codes:

  • 0 (OK): all collected test cases passed
  • 1 (TESTS_FAILED): some collected test cases failed
  • 2 (INTERRUPTED): test execution was interrupted by the user
  • 3 (INTERNAL_ERROR): an internal error occurred during test execution
  • 4 (USAGE_ERROR): the pytest command was used incorrectly
  • 5 (NO_TESTS_COLLECTED): no test cases were collected

They are declared in the _pytest.main.ExitCode enum. Moreover, as part of the public API, it can be imported and accessed directly:

from pytest import ExitCode
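
As a quick illustration, the exit code returned by pytest.main() can be compared against these enum members. This is a minimal sketch, not an example from the documentation; the tests/ directory is hypothetical:

# check_exit_code.py  (hypothetical helper script)

import pytest
from pytest import ExitCode

exit_code = pytest.main(['tests/'])  # returns the exit code instead of raising SystemExit
if exit_code == ExitCode.OK:
    print('all collected test cases passed')
elif exit_code == ExitCode.NO_TESTS_COLLECTED:
    print('no test cases were collected')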

Get help information

pytest --version  # View version number and pytest import path
pytest -h  # View Help Information

Maximum number of test cases allowed to fail

When the maximum number of failures is reached, execution stops; if not configured, there is no upper limit:

pytest -x  # Exit execution when the first failure occurs
pytest --maxfail=2  # Exit execution when the second failure occurs

Execute the specified test case

pytest supports a variety of ways to execute specific test cases:

Execute test cases in the specified module:

pytest test_mod.py

Execute test cases in the specified directory:

pytest testing/

Execute test cases with specific keywords in file name, class name, or function name:

yaomengdeMacBook-Air:src yaomeng$ pytest -k "_class and not two"
============================================================ test session starts =============================================================
platform darwin -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: /Users/yaomeng/Private/Projects/pytest-chinese-doc/src
plugins: repeat-0.8.0
collected 5 items / 4 deselected / 1 selected                                                                                                

test_class.py .                                                                                                                        [100%]

====================================================== 1 passed, 4 deselected in 0.03s =======================================================

This runs, under the src/ directory, the test cases whose names contain _class but not two; here the matching case is in the test_class.py module.

Note: Python keywords such as class and def cannot be used in the -k expression.

Execute the test case for the specified nodeid:

pytest assigns a unique node ID to each collected test case. It is made up of the module name and specifiers separated by ::; a specifier can be a class name, a function name, or parameters assigned by the parametrize mark.

# test_nodeid.py

import pytest


def test_one():
    print('test_one')
    assert 1


class TestNodeId:
    def test_one(self):
        print('TestNodeId::test_one')
        assert 1

    @pytest.mark.parametrize('x,y', [(1, 1), (3, 4)])
    def test_two(self, x, y):
        print(f'TestNodeId::test_two::{x} == {y}')
        assert x == y

Execute the test_one use case:

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest -s test_nodeid.py::test_one
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src
collected 1 item

test_nodeid.py test_one
.

================================================== 1 passed in 0.02s ==================================================

Execute the test_one use case in TestNodeId:

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest -s test_nodeid.py::TestNodeId::test_one
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src
collected 1 item

test_nodeid.py TestNodeId::test_one
.

================================================== 1 passed in 0.02s ==================================================

Execute the test_two use case in TestNodeId and specify the values of parameters x and y:

λ pytest -s test_nodeid.py::TestNodeId::test_two[3-4]
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src
collected 1 item

test_nodeid.py TestNodeId::test_two::3 == 4
F

====================================================== FAILURES =======================================================
______________________________________________ TestNodeId.test_two[3-4] _______________________________________________

self = <test_nodeid.TestNodeId object at 0x00000152613C49B0>, x = 3, y = 4

    @pytest.mark.parametrize('x,y', [(1, 1), (3, 4)])
    def test_two(self, x, y):
        print(f'TestNodeId::test_two::{x} == {y}')
>       assert x == y
E       assert 3 == 4

test_nodeid.py:37: AssertionError
================================================== 1 failed in 0.05s ==================================================

Here the values of parameters x and y are given in the form [3-4], separated by a hyphen; single or multiple parameters are assigned analogously. Note that only [1-1] and [3-4] are valid here; anything else causes an error.

Execute test cases with a specified mark

pytest -m slow

Execute all test cases marked with the @pytest.mark.slow decorator.
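
A minimal sketch of how such a mark might be applied (the module and test names are made up; the slow mark should also be registered in pytest.ini to avoid an "unknown mark" warning):

# test_slow_example.py  (hypothetical)

import time

import pytest


@pytest.mark.slow
def test_heavy_computation():
    time.sleep(3)  # illustrative slow operation
    assert True


def test_quick():
    assert True

Running pytest -m slow then selects only test_heavy_computation, while pytest -m "not slow" excludes it.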

Execute test cases in specified packages

/d/Personal Files/Python/pytest-chinese-doc (5.1.2)
λ pytest --pyargs src

Execute all test cases in the src package.

Note: the src folder must contain an __init__.py file.

Modify the traceback output mode

pytest traceback output has six modes: auto/long/short/line/native/no, specified with the --tb option:

pytest -l, --showlocals         # Print local variables
pytest --tb=auto                # Default mode
pytest --tb=long                # Output as detailed as possible
pytest --tb=short               # Shorter output
pytest --tb=line                # Each failure message is summarized in one line
pytest --tb=native              # Standard output of python
pytest --tb=no                  # Do not print failure information

--full-trace produces even more detailed output than --tb=long. It even shows the traceback when the user interrupts execution (Ctrl+C), which the six modes above do not print by default.

Summary Report

The -r option prints a short summary report after execution. When many test cases are executed, it gives a clear overview of the results:

# test_report.py

import pytest


@pytest.fixture
def error_fixture():
    assert 0


def test_ok():
    print("ok")


def test_fail():
    assert 0


def test_error(error_fixture):
    pass


def test_skip():
    pytest.skip("skipping this test")


def test_xfail():
    pytest.xfail("xfailing this test")


@pytest.mark.xfail(reason="always xfail")
def test_xpass():
    pass

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest -ra test_report.py
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src
collected 6 items

test_report.py .FEsxX                                                                                            [100%]

====================================================== ERRORS ========================================================
____________________________________________ ERROR at setup of test_error _____________________________________________

    @pytest.fixture
    def error_fixture():
>       assert 0
E       assert 0

test_report.py:27: AssertionError
====================================================== FAILURES =======================================================
______________________________________________________ test_fail ______________________________________________________

    def test_fail():
>       assert 0
E       assert 0

test_report.py:35: AssertionError
=============================================== short test summary info ===============================================
SKIPPED [1] D:\Personal Files\Python\pytest-chinese-doc\src\test_report.py:44: skipping this test
XFAIL test_report.py::test_xfail
  reason: xfailing this test
XPASS test_report.py::test_xpass always xfail
ERROR test_report.py::test_error - assert 0
FAILED test_report.py::test_fail - assert 0
======================== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.10s ========================

The character immediately following the -r option filters which results appear in the summary; a stands for all test cases except those that passed.

The following are all valid character parameters:

  • f: failed
  • E: error
  • s: skipped
  • x: xfailed (expected to fail)
  • X: xpassed (expected to fail but passed)
  • p: passed
  • P: passed with output (i.e. the test case prints something, etc.)
  • a: all except passed (i.e. everything except p and P)
  • A: all of the above

These characters can be combined; for example, to show only failed and skipped cases:

pytest -rfs

Load the PDB (Python Debugger) environment on failure

PDB is Python's built-in debugger. pytest lets you drop into it when a test fails, using the following command:

pytest --pdb

pytest invokes the debugger when a test case fails (or on Ctrl+C):

luyao@NJ-LUYAO-T460 /d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest --pdb test_class.py
================================================= test session starts ================================================= 
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src
collected 2 items

test_class.py .F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

self = <src.test_class.TestClass object at 0x00000269030ECB70>

    def test_two(self):
        x = 'hello'
>       assert hasattr(x, 'check')
E       AssertionError: assert False
E        +  where False = hasattr('hello', 'check')

test_class.py:30: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB post_mortem (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
> d:\personal files\python\pytest-chinese-doc\src\test_class.py(30)test_two()
-> assert hasattr(x, 'check')
(Pdb) x
'hello'
(Pdb)

At this point, you can access the local variable x of the test case.

Also, the failure information is stored in the sys.last_value, sys.last_type, and sys.last_traceback variables, which you can access in the interactive session:

(Pdb) import sys
(Pdb) sys.last_value
AssertionError("assert False\n +  where False = hasattr('hello', 'check')")
(Pdb) sys.last_type
<class 'AssertionError'>
(Pdb) sys.last_traceback
<traceback object at 0x00000269030F0908>
(Pdb)

Load the PDB environment at the start of execution

With the following command, pytest loads the PDB environment at the very start of each test case:

pytest --trace

Setting breakpoints

Add import pdb; pdb.set_trace() to the test code (as in the sketch after this list); when it is hit, pytest stops capturing output for that test case:

  • Output capturing of other test cases is not affected;
  • Exit the PDB environment with the continue command to resume executing the test case.
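
A minimal sketch (the file and test names are made up) of placing such a breakpoint inside a test:

# test_set_trace.py  (hypothetical)

def test_with_breakpoint():
    x = 'hello'
    import pdb; pdb.set_trace()  # execution pauses here; output capturing is suspended for this test case
    assert isinstance(x, str)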

Using the built-in breakpoint() function

Python 3.7 introduced the built-in breakpoint() function, which pytest supports in the following ways (a short sketch follows this list):

  • When breakpoint() is called and PYTHONBREAKPOINT is left at its default value, pytest replaces the system PDB with its own customized PDB;
  • When test execution finishes, the system PDB is automatically restored;
  • When the --pdb option is passed, the customized internal PDB is used both for breakpoint() calls and for test failures/errors;
  • The --pdbcls option allows you to specify a custom debugger class.
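
A minimal sketch (hypothetical file and test names; requires Python 3.7+):

# test_builtin_breakpoint.py  (hypothetical)

def test_with_builtin_breakpoint():
    data = {'status': 'ok'}
    breakpoint()  # under pytest, this drops into pytest's customized PDB by default
    assert data['status'] == 'ok'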

Analysis of test execution time

Get the 10 slowest test cases:

pytest --durations=10

By default, pytest does not list test cases whose duration is shorter than 0.01s; add the -vv option to see them as well.

Fault handler

New in version 5.0

The standard faulthandler module can dump Python tracebacks when test execution hits a segmentation fault or a timeout.

It is enabled by default during pytest execution and can be disabled with the -p no:faulthandler option.

Similarly, the faulthandler_timeout=X configuration item dumps the Python tracebacks of all threads when a test case takes longer than X seconds to finish.

# pytest.ini

[pytest]
faulthandler_timeout=5

This configures a test execution timeout of 5 seconds.

# test_fault_handler.py 

import time


def test_fault_handler():
    time.sleep(10)
    assert 1

The test case sleeps for 10 seconds.

yaomengdeMacBook-Air:src yaomeng$ pytest test_fault_handler.py 
============================================================ test session starts =============================================================
platform darwin -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: /Users/yaomeng/Private/Projects/pytest-chinese-doc/src, inifile: pytest.ini
plugins: repeat-0.8.0
collected 1 item                                                                                                                             

test_fault_handler.py Timeout (0:00:05)!
Thread 0x000000010d5895c0 (most recent call first):
  File "/Users/yaomeng/Private/Projects/pytest-chinese-doc/src/test_fault_handler.py", line 26 in test_fault_handler
  File "/usr/local/lib/python3.7/site-packages/_pytest/python.py", line 170 in pytest_pyfunc_call
  File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
  File "/usr/local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
  File "/usr/local/lib/python3.7/site-packages/_pytest/python.py", line 1423 in runtest
  File "/usr/local/lib/python3.7/site-packages/_pytest/runner.py", line 117 in pytest_runtest_call
  File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
  File "/usr/local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
  File "/usr/local/lib/python3.7/site-packages/_pytest/runner.py", line 192 in <lambda>
  File "/usr/local/lib/python3.7/site-packages/_pytest/runner.py", line 220 in from_call
  File "/usr/local/lib/python3.7/site-packages/_pytest/runner.py", line 192 in call_runtest_hook
  File "/usr/local/lib/python3.7/site-packages/_pytest/runner.py", line 167 in call_and_report
  File "/usr/local/lib/python3.7/site-packages/_pytest/runner.py", line 87 in runtestprotocol
  File "/usr/local/lib/python3.7/site-packages/_pytest/runner.py", line 72 in pytest_runtest_protocol
  File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
  File "/usr/local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
  File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 256 in pytest_runtestloop
  File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
  File "/usr/local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
  File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 235 in _main
  File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 191 in wrap_session
  File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 228 in pytest_cmdline_main
  File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 81 in <lambda>
  File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 87 in _hookexec
  File "/usr/local/lib/python3.7/site-packages/pluggy/hooks.py", line 289 in __call__
  File "/usr/local/lib/python3.7/site-packages/_pytest/config/__init__.py", line 78 in main
  File "/usr/local/bin/pytest", line 10 in <module>
.                                                                                                                [100%]

============================================================= 1 passed in 10.02s =============================================================

The traceback is printed once execution exceeds 5 seconds; however, the test run itself is not interrupted.

yaomengdeMacBook-Air:src yaomeng$ pytest -p no:faulthandler test_fault_handler.py 
============================================================ test session starts =============================================================
platform darwin -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: /Users/yaomeng/Private/Projects/pytest-chinese-doc/src, inifile: pytest.ini
plugins: repeat-0.8.0
collected 1 item 

test_fault_handler.py .                                                                                            [100%]

============================================================= 1 passed in 10.02s =============================================================

With faulthandler disabled, the timeout no longer triggers the traceback dump.

Note: This functionality was merged from the pytest-faulthandler plugin, with two differences:

  • To disable it, use -p no:faulthandler instead of the plugin's original --no-faulthandler option;
  • The faulthandler_timeout ini configuration item replaces the --faulthandler-timeout command-line option; you can still set it on the command line with -o faulthandler_timeout=X.

Create a JUnit XML format result report

With the following command, you can create a result file that Jenkins and other CI tools can read:

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_class.py --junitxml=../report/test_class.xml

It creates an XML file in the specified directory:

<!--test_class.xml-->
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
    <testsuite errors="0" failures="1" hostname="NJ-LUYAO-T460" name="pytest" skipped="0" tests="2" time="0.058"
        timestamp="2019-09-10T09:53:07.554001">
        <testcase classname="test_class.TestClass" file="test_class.py" line="23" name="test_one" time="0.001">
        </testcase>
        <testcase classname="test_class.TestClass" file="test_class.py" line="27" name="test_two" time="0.002">
            <failure message="AssertionError: assert False
 +  where False = hasattr(&apos;hello&apos;, &apos;check&apos;)">self = &lt;src.test_class.TestClass object at
                0x000001773E688C50&gt;

                def test_two(self):
                x = &apos;hello&apos;
                &gt; assert hasattr(x, &apos;check&apos;)
                E AssertionError: assert False
                E + where False = hasattr(&apos;hello&apos;, &apos;check&apos;)

                test_class.py:30: AssertionError</failure>
        </testcase>
    </testsuite>
</testsuites>

You can set junit_suite_name in the pytest.ini file to change the name attribute of the testsuite root node in the XML file:

# pytest.ini

[pytest]
junit_suite_name = pytest_chinese_doc

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
pytest test_class.py test_sample.py --junitxml=../report/test_mysuitename.xml

<!--test_mysuitename.xml-->
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
    <testsuite errors="0" failures="2" hostname="NJ-LUYAO-T460" name="pytest_chinese_doc" skipped="0" tests="3"
        time="0.070" timestamp="2019-09-10T10:07:29.078541">
        <testcase classname="test_class.TestClass" file="test_class.py" line="23" name="test_one" time="0.000">
        </testcase>
        <testcase classname="test_class.TestClass" file="test_class.py" line="27" name="test_two" time="0.001">
            <failure message="AssertionError: assert False
 +  where False = hasattr(&apos;hello&apos;, &apos;check&apos;)">self = &lt;src.test_class.TestClass object at
                0x000001C7F1155550&gt;

                def test_two(self):
                x = &apos;hello&apos;
                &gt; assert hasattr(x, &apos;check&apos;)
                E AssertionError: assert False
                E + where False = hasattr(&apos;hello&apos;, &apos;check&apos;)

                test_class.py:30: AssertionError</failure>
        </testcase>
        <testcase classname="test_sample" file="test_sample.py" line="26" name="test_sample" time="0.001">
            <failure message="assert 4 == 5
 +  where 4 = func(3)">def test_sample():
                &gt; assert func(3) == 5
                E assert 4 == 5
                E + where 4 = func(3)

                test_sample.py:28: AssertionError</failure>
        </testcase>
    </testsuite>
</testsuites>

As you can see, the name attribute in <testsuite errors="0" failures="2" hostname="NJ-LUYAO-T460" name="pytest_chinese_doc" ...> has changed from the default pytest to the expected pytest_chinese_doc.

Note: This is a new configuration item for version 4.0.

The JUnit XML specification says the time attribute should report the total execution time of a test case, including setup and teardown, which is also pytest's default behavior; if you only want to record the duration of the test call itself, configure as follows:

# pytest.ini

[pytest]
junit_duration_report = call

record_property

If you want to add extra property nodes to a test case's report, you can use the record_property fixture:

# test_record_property.py

def test_record_property(record_property):
    record_property('example_key', 1)
    assert 1

We expect an example_key property node to be added:

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_record_property.py --junitxml=../report/test_record_property.xml

<!--test_record_property.xml-->
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
    <testsuite errors="0" failures="0" hostname="NJ-LUYAO-T460" name="pytest_chinese_doc" skipped="0" tests="1"
        time="0.025" timestamp="2019-09-10T10:24:20.796619">
        <testcase classname="test_record_property" file="test_record_property.py" line="22" name="test_record_property"
            time="0.001">
            <properties>
                <property name="example_key" value="1" />
            </properties>
        </testcase>
    </testsuite>
</testsuites>

In this way, a new property example_key="1" is added to the test_record_property test case.

Alternatively, you can integrate this functionality into a custom mark, for example to attach a test ID to a test case:

# conftest.py

def pytest_collection_modifyitems(session, config, items):
    for item in items:
        for marker in item.iter_markers(name='test_id'):
            test_id = marker.args[0]
            item.user_properties.append(('test_id', test_id))

Mark the following test case with a test ID:

# test_testid.py
import pytest


@pytest.mark.test_id(10010)
def test_record_property(record_property):
    record_property('example_key', 1)
    assert 1

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_testid.py --junitxml=../report/test_testid.xml
================================================= test session starts ================================================= 
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 1 item

test_testid.py .                                                                                                 [100%]

================================================== warnings summary =================================================== 
d:\program files\python\python37\lib\site-packages\_pytest\mark\structures.py:324
  d:\program files\python\python37\lib\site-packages\_pytest\mark\structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.test_id - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
----------- generated xml file: D:\Personal Files\Python\pytest-chinese-doc\report\test_record_property.xml -----------
============================================ 1 passed, 1 warnings in 0.03s ============================================

There is a warning that the custom mark is not registered, but it does not affect execution. We just need to register it in pytest.ini:

# pytest.ini

[pytest]
markers =
    test_id: add an ID for a test case

<!--test_testid.xml-->
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
    <testsuite errors="0" failures="0" hostname="NJ-LUYAO-T460" name="pytest_chinese_doc" skipped="0" tests="1"
        time="0.025" timestamp="2019-09-10T10:46:50.694771">
        <testcase classname="test_testid" file="test_testid.py" line="24" name="test_record_property" time="0.002">
            <properties>
                <property name="test_id" value="10010" />
                <property name="example_key" value="1" />
            </properties>
        </testcase>
    </testsuite>
</testsuites>

As you can see, a new property test_id="10010" has been added for the test_record_property test case.

Note: Using this feature may make the XML file fail the latest JUnit XML schema validation, causing errors in some CI tools.

record_xml_attribute

If you want to add an attribute to the testcase node itself, you can use the record_xml_attribute fixture. If the attribute already exists, its original value is overwritten with the new one:

# test_record_attribute.py

def test_record_attribute(record_xml_attribute):
    record_xml_attribute('example_key', 1)
    record_xml_attribute('classname', 'overwrite_classname')
    assert 1

Add a new example_key attribute and modify the classname attribute.

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_record_attribute.py --junitxml=../report/test_record_attribute.xml
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 1 item

test_record_attribute.py .                                                                                       [100%]

================================================== warnings summary ===================================================
test_record_attribute.py::test_record_attribute
  test_record_attribute.py:23: PytestExperimentalApiWarning: record_xml_attribute is an experimental feature
    def test_record_attribute(record_xml_attribute):

-- Docs: https://docs.pytest.org/en/latest/warnings.html
---------- generated xml file: D:\Personal Files\Python\pytest-chinese-doc\report\test_record_attribute.xml -----------
============================================ 1 passed, 1 warnings in 0.03s ============================================

<!--test_record_attribute.xml-->
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
    <testsuite errors="0" failures="0" hostname="NJ-LUYAO-T460" name="pytest_chinese_doc" skipped="0" tests="1"
        time="0.026" timestamp="2019-09-10T11:00:57.818851">
        <testcase classname="overwrite_classname" example_key="1" file="test_record_attribute.py" line="22"
            name="test_record_attribute" time="0.002"></testcase>
    </testsuite>
</testsuites>

As you can see, in the <testcase classname="overwrite_classname" example_key="1" ...> node, classname has been overwritten and the example_key attribute has been added.

Note: record_xml_attribute is currently an experimental feature and may be replaced by a more powerful API in the future, but the function itself will be retained.

Note: Using this feature may make the XML file fail the latest JUnit XML schema validation, causing errors in some CI tools.

record_testsuite_property

New in version 4.5

If you want to add property nodes at the testsuite level that apply to all test cases, you can use the session-scoped record_testsuite_property fixture:

# conftest.py

import pytest


@pytest.fixture(scope="session", autouse=True)
def log_global_env_facts(record_testsuite_property):
    record_testsuite_property("ARCH", "PPC")
    record_testsuite_property("STORAGE_TYPE", "CEPH")

record_testsuite_property takes two parameters, name and value, to form a <property> tag; name must be a string, and value is converted to a string and XML-escaped.

# test_record_testsuite_property.py

class TestMe:
    def test_foo(self):
        assert True

log_global_env_facts is a session-scoped fixture with autouse=True, so it does not need to be requested explicitly.

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_record_testsuite_property.py --junitxml=../report/test_record_testsuite_property.xml

<!--test_record_testsuite_property.xml-->
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
    <testsuite errors="0" failures="0" hostname="NJ-LUYAO-T460" name="pytest_chinese_doc" skipped="0" tests="1"
        time="0.029" timestamp="2019-09-10T11:20:46.123978">
        <properties>
            <property name="ARCH" value="PPC" />
            <property name="STORAGE_TYPE" value="CEPH" />
        </properties>
        <testcase classname="test_record_testsuite_property.TestMe" file="test_record_testsuite_property.py" line="23"
            name="test_foo" time="0.001"></testcase>
    </testsuite>
</testsuites>

As you can see, a properties node containing all the new properties has been added under the testsuite node; it is a sibling of the testcase nodes.

Note: The generated XML conforms to the latest xunit standard, unlike record_property and record_xml_attribute.

Send test reports to an online pastebin

At present, only pasting to http://bpaste.net is implemented.

Create a URL for each failed test case

luyao@NJ-LUYAO-T460 /d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_class.py --pastebin=failed
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0 -- d:\program files\python\python37\python.exe
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 2 items

test_class.py .F                                                                                                 [100%]

====================================================== FAILURES ======================================================= 
_________________________________________________ TestClass.test_two __________________________________________________

self = <src.test_class.TestClass object at 0x000001AB97F29208>

    def test_two(self):
        x = 'hello'
>       assert hasattr(x, 'check')
E       AssertionError: assert False
E        +  where False = hasattr('hello', 'check')

test_class.py:30: AssertionError
======================================== Sending information to Paste Service ========================================= 
test_class.py:30: AssertionError --> https://bpaste.net/show/HGeB
============================================= 1 failed, 1 passed in 5.64s =============================================

Visit https://bpaste.net/show/HGeB in the browser to see the failure information.

Create a URL for all test cases

luyao@NJ-LUYAO-T460 /d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_class.py --pastebin=all
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0 -- d:\program files\python\python37\python.exe
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 2 items

test_class.py .F                                                                                                 [100%]

====================================================== FAILURES ======================================================= 
_________________________________________________ TestClass.test_two __________________________________________________

self = <src.test_class.TestClass object at 0x00000218030F91D0>

    def test_two(self):
        x = 'hello'
>       assert hasattr(x, 'check')
E       AssertionError: assert False
E        +  where False = hasattr('hello', 'check')

test_class.py:30: AssertionError
============================================= 1 failed, 1 passed in 0.06s ============================================= 
======================================== Sending information to Paste Service =========================================
pastebin session-log: https://bpaste.net/show/XdVT

Visit https://bpaste.net/show/XdVT in the browser to see the execution information of all the cases.

Loading plug-ins as early as possible

You can use the -p option on the command line to load a plug-in as early as possible:

pytest -p mypluginmodule

The -p option receives a name parameter, which can be:

  • The full module name of a local plug-in, for example myproject.plugins, which must be importable;
  • The name of a public plug-in as registered with setuptools, for example: pytest-cov

Disabling plug-ins

You can combine -p with the no: prefix on the command line to disable the loading of a plug-in, for example:

pytest -p no:doctest

Call pytest from Python code

You can call pytest directly in your code:

pytest.main()

This behaves like running pytest on the command line, except that it does not raise SystemExit; instead, it returns the exit code:

# invoke_with_main.py

import time


def test_one():
    time.sleep(10)
    assert True


if __name__ == '__main__':
    import pytest
    ret = pytest.main([__file__])
    print(ret == pytest.ExitCode.INTERRUPTED)

The test case sleeps for 10 seconds. If we interrupt execution (Ctrl+C) during this period, we expect pytest.main() to return the INTERRUPTED status code, i.e. 2:

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ python invoke_with_main.py
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 1 item

invoke_with_main.py

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! KeyboardInterrupt !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
d:\personal files\python\pytest-chinese-doc\src\invoke_with_main.py:26: KeyboardInterrupt
(to show a full traceback on KeyboardInterrupt use --full-trace)
================================================ no tests ran in 2.11s ================================================
True

As you can see, pytest.main() does return the status code of INTERRUPTED.

You can also pass options and parameters:

pytest.main(["-x", "mytestdir"])

Similarly, you can specify a custom plug-in for pytest.main():

# invoke_with_plugins.py

class MyPlugin:
    def pytest_sessionfinish(self):
        print('*** test run reporting finishing')


if __name__ == '__main__':
    import pytest
    pytest.main(['test_class.py'], plugins=[MyPlugin()])

When you execute this file, you will find that the plug-in is functioning properly, and its hook method is also called:

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ python invoke_with_plugins.py
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 2 items

test_class.py .F                                                                                                 [100%]
*** test run reporting finishing


====================================================== FAILURES ======================================================= 
_________________________________________________ TestClass.test_two __________________________________________________

self = <src.test_class.TestClass object at 0x00000236949A20B8>

    def test_two(self):
        x = 'hello'
>       assert hasattr(x, 'check')
E       AssertionError: assert False
E        +  where False = hasattr('hello', 'check')

test_class.py:30: AssertionError
============================================= 1 failed, 1 passed in 0.06s =============================================

Note: Calling pytest.main() imports your test files and all the modules they reference. Because of Python's module caching, later calls to pytest.main() within the same process will not pick up changes made to those files in the meantime. For this reason, it is not recommended to call pytest.main() multiple times in the same process (for example, to re-run tests); if you do need that, consider the pytest-repeat plug-in.
