3. pytest Chinese Documentation -- Writing Assertions

Keywords: Python Session Lambda

Contents

Writing assertions

Writing assertions using assert

pytest allows you to write assertions using Python's standard assert expression.

For example, you can do this:

# test_sample.py

def func(x):
    return x + 1


def test_sample():
    assert func(3) == 5

If this assertion fails, you will see the actual return value of func(3):

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_sample.py
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 1 item

test_sample.py F                                                                                                 [100%]

====================================================== FAILURES =======================================================
_____________________________________________________ test_sample _____________________________________________________

    def test_sample():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)

test_sample.py:28: AssertionError
================================================== 1 failed in 0.05s ================================================== 

pytest supports displaying the values of common Python subexpressions, including function calls, attributes, comparisons, and binary and unary operators (for reference, see the demo of failure reports supported by pytest).

This lets you use Python's data structures without any boilerplate code, and without worrying about losing introspection information.

At the same time, you can attach a message to an assertion, to be shown when it fails:

assert a % 2 == 0, "value was odd, should be even"
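As a minimal sketch of this form (the is_even helper is hypothetical, used only for illustration), the message after the comma is reported only when the assertion fails:

```python
def is_even(n):
    # hypothetical helper used only for illustration
    return n % 2 == 0


def test_even():
    a = 4
    # the message after the comma is shown only when the assertion fails
    assert is_even(a), "value was odd, should be even"
```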

Writing assertions about expected exceptions

You can use pytest.raises() as a context manager to assert that a block of code raises the expected exception:

import pytest


def myfunc():
    raise ValueError("Exception 123 raised")


def test_match():
    with pytest.raises(ValueError):
        myfunc()

The assertion fails if the test does not raise a ValueError, or raises no exception at all.

If you also want to access the attributes of the exception, you can do this:

import pytest


def myfunc():
    raise ValueError("Exception 123 raised")


def test_match():
    with pytest.raises(ValueError) as excinfo:
        myfunc()
    assert '123' in str(excinfo.value)

Here, excinfo is an instance of ExceptionInfo, which wraps the raised exception; its commonly used attributes are .type, .value and .traceback.
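A small sketch of these attributes in use (boom is a hypothetical stand-in for myfunc above):

```python
import pytest


def boom():
    raise ValueError("Exception 123 raised")


def test_excinfo_attrs():
    with pytest.raises(ValueError) as excinfo:
        boom()
    assert excinfo.type is ValueError    # .type: the exception class
    assert "123" in str(excinfo.value)   # .value: the exception instance
    assert len(excinfo.traceback) > 0    # .traceback: the traceback entries
```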

Note: within the scope of the context manager, the statement that raises the exception must be the last line; otherwise, the code after it will not execute. So if the example above is changed to:

def test_match():
    with pytest.raises(ValueError) as excinfo:
        myfunc()
        assert '456' in str(excinfo.value)

The test will always pass, because assert '456' in str(excinfo.value) is never executed.

You can also pass pytest.raises() a keyword argument match, to test whether the string representation of the exception, str(excinfo.value), matches a given regular expression (similar to unittest's TestCase.assertRaisesRegexp method):

import pytest


def myfunc():
    raise ValueError("Exception 123 raised")


def test_match():
    with pytest.raises((ValueError, RuntimeError), match=r'.* 123 .*'):
        myfunc()

pytest actually calls the re.search() method to perform the above check. pytest.raises() also supports checking for multiple expected exceptions (passed as a tuple); only one of them needs to be raised.

pytest.raises has another form of use:

First, let's look at its definition in the source code:

# _pytest/python_api.py

def raises(  # noqa: F811
    expected_exception: Union["Type[_E]", Tuple["Type[_E]", ...]],
    *args: Any,
    match: Optional[Union[str, "Pattern"]] = None,
    **kwargs: Any
) -> Union["RaisesContext[_E]", Optional[_pytest._code.ExceptionInfo[_E]]]:

It takes a positional parameter expected_exception, variable positional parameters args, a keyword parameter match, and variable keyword parameters kwargs.

Then, further down:

# _pytest/python_api.py

    if not args:
        if kwargs:
            msg = "Unexpected keyword arguments passed to pytest.raises: "
            msg += ", ".join(sorted(kwargs))
            msg += "\nUse context-manager form instead?"
            raise TypeError(msg)
        return RaisesContext(expected_exception, message, match)
    else:
        func = args[0]
        if not callable(func):
            raise TypeError(
                "{!r} object (type: {}) must be callable".format(func, type(func))
            )
        try:
            func(*args[1:], **kwargs)
        except expected_exception as e:
            # We just caught the exception - there is a traceback.
            assert e.__traceback__ is not None
            return _pytest._code.ExceptionInfo.from_exc_info(
                (type(e), e, e.__traceback__)
            )
    fail(message)

If args is non-empty, its first element must be a callable object, otherwise a TypeError is raised; the remaining args and all of kwargs are passed to that callable, which is then invoked to check whether it raises the specified exception.

So we have a new way of writing:

pytest.raises(ZeroDivisionError, lambda x: 1/x, 0)

# or

pytest.raises(ZeroDivisionError, lambda x: 1/x, x=0)

At this point, passing the match parameter has no effect, because it is only used inside the `if not args:` branch.
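In this form, pytest.raises() returns the ExceptionInfo directly instead of a context manager, so the exception can still be inspected; a minimal sketch:

```python
import pytest


def test_callable_form():
    # positional arguments after the callable are forwarded to it
    excinfo = pytest.raises(ZeroDivisionError, lambda x: 1 / x, 0)
    assert excinfo.type is ZeroDivisionError

    # keyword arguments are forwarded as well
    excinfo = pytest.raises(ZeroDivisionError, lambda x: 1 / x, x=0)
    assert "division" in str(excinfo.value)
```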

In addition, pytest.mark.xfail() can also take a raises parameter, to check whether a test case fails because of a specific exception:

@pytest.mark.xfail(raises=IndexError)
def test_f():
    f()

If f() raises an IndexError, the test case is marked as xfailed; if it raises nothing, f() simply executes normally;

Note: if the f() test succeeds, the result of the test case is xpass, not pass.

pytest.raises is suited to checking exceptions deliberately raised by your code, while @pytest.mark.xfail(raises=...) is better suited to recording bugs that have not yet been fixed.

Optimized comparisons for special data structures

# test_special_compare.py

def test_set_comparison():
    set1 = set('1308')
    set2 = set('8035')
    assert set1 == set2


def test_long_str_comparison():
    str1 = 'show me codes'
    str2 = 'show me money'
    assert str1 == str2


def test_dict_comparison():
    dict1 = {
        'x': 1,
        'y': 2,
    }
    dict2 = {
        'x': 1,
        'y': 1,
    }
    assert dict1 == dict2

Above, we compared three kinds of data structures: sets, strings, and dictionaries.

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_special_compare.py
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 3 items

test_special_compare.py FFF                                                                                      [100%]

====================================================== FAILURES =======================================================
_________________________________________________ test_set_comparison _________________________________________________

    def test_set_comparison():
        set1 = set('1308')
        set2 = set('8035')
>       assert set1 == set2
E       AssertionError: assert {'0', '1', '3', '8'} == {'0', '3', '5', '8'}
E         Extra items in the left set:
E         '1'
E         Extra items in the right set:
E         '5'
E         Use -v to get the full diff

test_special_compare.py:26: AssertionError
______________________________________________ test_long_str_comparison _______________________________________________

    def test_long_str_comparison():
        str1 = 'show me codes'
        str2 = 'show me money'
>       assert str1 == str2
E       AssertionError: assert 'show me codes' == 'show me money'
E         - show me codes
E         ?         ^ ^ ^
E         + show me money
E         ?         ^ ^ ^

test_special_compare.py:32: AssertionError
________________________________________________ test_dict_comparison _________________________________________________

    def test_dict_comparison():
        dict1 = {
            'x': 1,
            'y': 2,
        }
        dict2 = {
            'x': 1,
            'y': 1,
        }
>       assert dict1 == dict2
E       AssertionError: assert {'x': 1, 'y': 2} == {'x': 1, 'y': 1}
E         Omitting 1 identical items, use -vv to show
E         Differing items:
E         {'y': 2} != {'y': 1}
E         Use -v to get the full diff

test_special_compare.py:44: AssertionError
================================================== 3 failed in 0.09s ==================================================

For comparisons of some special data structures, pytest optimizes how the results are displayed:

  • Sets, lists, etc.: mark the first differing element;
  • Strings: mark the differing parts;
  • Dictionaries: mark the differing entries;

For more examples, refer to the demo of failure reports supported by pytest.

Adding custom explanations for failed assertions

# test_foo_compare.py

class Foo:
    def __init__(self, val):
        self.val = val

    def __eq__(self, other):
        return self.val == other.val
    
    
def test_foo_compare():
    f1 = Foo(1)
    f2 = Foo(2)
    assert f1 == f2

We define a Foo class and override its __eq__() method, but when we run this test case:

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_foo_compare.py
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 1 item

test_foo_compare.py F                                                                                            [100%]

====================================================== FAILURES =======================================================
__________________________________________________ test_foo_compare ___________________________________________________

    def test_foo_compare():
        f1 = Foo(1)
        f2 = Foo(2)
>       assert f1 == f2
E       assert <src.test_foo_compare.Foo object at 0x0000020E90C4E978> == <src.test_foo_compare.Foo object at 0x0000020E90C4E630>

test_foo_compare.py:37: AssertionError
================================================== 1 failed in 0.04s ==================================================

It is hard to see the reason for the failure intuitively.

In this case, we have two ways to solve the problem:

  • Override Foo's __repr__() method:
def __repr__(self):
    return str(self.val)

Let's run the test case again:

luyao@NJ-LUYAO-T460 /d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_foo_compare.py
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 1 item

test_foo_compare.py F                                                                                            [100%]

====================================================== FAILURES =======================================================
__________________________________________________ test_foo_compare ___________________________________________________

    def test_foo_compare():
        f1 = Foo(1)
        f2 = Foo(2)
>       assert f1 == f2
E       assert 1 == 2

test_foo_compare.py:37: AssertionError
================================================== 1 failed in 0.06s ==================================================

This time, we can see that the failure is because 1 == 2 does not hold.

As for the difference between __str__() and __repr__(), you can refer to the answers to this Stack Overflow question: https://stackoverflow.com/questions/1436703/difference-between-str-and-repr.
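To illustrate the difference with a standard-library type: str() aims at a readable representation, while repr() aims at an unambiguous, developer-facing one:

```python
import datetime

now = datetime.datetime(2019, 9, 11, 2, 4, 23)

print(str(now))   # readable: 2019-09-11 02:04:23
print(repr(now))  # unambiguous: datetime.datetime(2019, 9, 11, 2, 4, 23)
```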

  • Add a custom failure explanation using the hook function pytest_assertrepr_compare
# conftest.py

from .test_foo_compare import Foo


def pytest_assertrepr_compare(op, left, right):
    if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
        return [
            "Compare two Foo Example:",  # Write a summary at the top
            "   value: {} != {}".format(left.val, right.val),  # All but the first line can be indented.
        ]

Run it again:

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_foo_compare.py
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 1 item

test_foo_compare.py F                                                                                            [100%]

====================================================== FAILURES =======================================================
__________________________________________________ test_foo_compare ___________________________________________________

    def test_foo_compare():
        f1 = Foo(1)
        f2 = Foo(2)
>       assert f1 == f2
E       assert Compare two Foo Example:
E            value: 1 != 2

test_foo_compare.py:37: AssertionError
================================================== 1 failed in 0.05s ==================================================

We now see a much friendlier failure explanation.

Details of assertion introspection

When an assertion fails, pytest gives us a very readable failure explanation, usually interleaved with the introspected values of the relevant variables; this is what we call assertion introspection.

So how does pytest do this?

  • pytest discovers the test modules and imports them; during import, pytest rewrites the assert statements and adds introspection information to them; however, assert statements in non-test modules are not rewritten;

Caching of rewritten files

pytest caches the rewritten modules locally. You can add the following configuration to conftest.py in the root directory of your test cases:

import sys

sys.dont_write_bytecode = True

to disable this behavior.

However, this does not stop you from enjoying the benefits of assertion introspection; it simply won't store the .pyc files locally.

Disabling assertion introspection

You can do it in two ways:

  • Add the string PYTEST_DONT_REWRITE to the docstring of the module that should not be rewritten;
  • Add the --assert=plain option when executing pytest.
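As a sketch of the first option, the marker goes into the module's docstring (the module name is hypothetical):

```python
# test_no_rewrite.py (hypothetical module name)
"""PYTEST_DONT_REWRITE"""


def test_simple():
    # assertions in this module keep the plain AssertionError output,
    # without pytest's introspection information
    assert (1 + 1) == 2
```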

Let's take a look at the effect after disabling:

/d/Personal Files/Python/pytest-chinese-doc/src (5.1.2)
λ pytest test_foo_compare.py --assert=plain
================================================= test session starts =================================================
platform win32 -- Python 3.7.3, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: D:\Personal Files\Python\pytest-chinese-doc\src, inifile: pytest.ini
collected 1 item

test_foo_compare.py F                                                                                            [100%]

====================================================== FAILURES =======================================================
__________________________________________________ test_foo_compare ___________________________________________________

    def test_foo_compare():
        f1 = Foo(1)
        f2 = Foo(2)
>       assert f1 == f2
E       AssertionError

test_foo_compare.py:37: AssertionError
================================================== 1 failed in 0.05s ==================================================

When the assertion fails, the information is very sparse, and we can hardly see any useful debug information from it.

Posted by if on Wed, 11 Sep 2019 02:04:23 -0700