One Python standard library per week


Technology blog: https://github.com/yongxinz/tech-blog

You are also welcome to follow my WeChat official account, AlwaysBeta, where more great content awaits.

unittest is the unit testing framework that ships with Python's standard library. It can serve as the case-organization and execution layer of an automated testing framework.

Advantages: it provides ways to organize and execute test cases, a rich set of assertion methods for comparisons, and detailed logs with clear reports.

How unittest works

The core parts of unittest are: TestFixture, TestCase, TestSuite and TestRunner.

Let's go through these four concepts one by one. First, here is a static class diagram of unittest:

[Figure: unittest static class diagram (image unavailable)]

  • An instance of TestCase is a test case. What is a test case? It is a complete testing flow: preparing the environment before the test (setUp), executing the test code (run), and restoring the environment afterwards (tearDown). Here lies the essence of unit testing: a test case is one complete unit of testing, and running it verifies one specific behavior.
  • Multiple test cases grouped together form a TestSuite, and a TestSuite can nest other TestSuites.
  • TestLoader loads TestCases into a TestSuite. It has several loadTestsFrom__() methods, which find TestCases in various places, create instances of them, add them to a TestSuite, and return that TestSuite instance.
  • TextTestRunner executes the test cases; its run(test) method calls run(result) on the TestSuite/TestCase. The results are saved in a TextTestResult instance, recording how many cases ran, how many succeeded, how many failed, and so on.
  • The construction and destruction of a test case's environment is a Fixture.

A class that inherits from unittest.TestCase is a test case class. If it contains several methods whose names start with test, each of those methods becomes a separate TestCase instance when loaded. For example, a class with four test_xxx methods yields four test cases when loaded into a suite.

The whole process is now clear:

  • Write the TestCase.
  • Load the TestCase into a TestSuite with a TestLoader.
  • Run the TestSuite with a TextTestRunner; the results are saved in a TextTestResult.
    When executing via the command line or unittest.main(), main() calls run() in TextTestRunner; you can also invoke TextTestRunner directly to run the cases.
  • When the runner executes, the results are printed to the console by default. You can redirect them to a file and review them there. (You may have heard of HTMLTestRunner; as the name suggests, it works like TextTestRunner but writes the results as HTML, generating a good-looking report. More on that later.)
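The steps above can be sketched in a few lines (DemoTest here is a hypothetical case, purely for illustration):

```python
import io
import unittest


class DemoTest(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('abc'.upper(), 'ABC')


# TestLoader finds the test_* methods and wraps each one in a TestCase instance.
suite = unittest.TestLoader().loadTestsFromTestCase(DemoTest)

# TextTestRunner executes the suite; the results land in a TextTestResult.
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=2).run(suite)

print(result.testsRun)         # number of cases executed
print(result.wasSuccessful())  # True when there were no failures or errors
```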

unittest examples

Let's take a closer look at unittest with some examples.

Write TestCase

Prepare the method to be tested first, as follows:

# mathfunc.py

# -*- coding: utf-8 -*-

def add(a, b):
    return a+b

def minus(a, b):
    return a-b

def multi(a, b):
    return a*b

def divide(a, b):
    return a/b

Write TestCase as follows:

# test_mathfunc.py

# -*- coding: utf-8 -*-

import unittest
from mathfunc import *


class TestMathFunc(unittest.TestCase):
    """Test mathfuc.py"""

    def test_add(self):
        """Test method add(a, b)"""
        self.assertEqual(3, add(1, 2))
        self.assertNotEqual(3, add(2, 2))

    def test_minus(self):
        """Test method minus(a, b)"""
        self.assertEqual(1, minus(3, 2))

    def test_multi(self):
        """Test method multi(a, b)"""
        self.assertEqual(6, multi(2, 3))

    def test_divide(self):
        """Test method divide(a, b)"""
        self.assertEqual(2, divide(6, 3))
        self.assertEqual(2.5, divide(5, 2))

if __name__ == '__main__':
    unittest.main()

Execution result:

.F..
======================================================================
FAIL: test_divide (__main__.TestMathFunc)
Test method divide(a, b)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:/py/test_mathfunc.py", line 26, in test_divide
    self.assertEqual(2.5, divide(5, 2))
AssertionError: 2.5 != 2

----------------------------------------------------------------------
Ran 4 tests in 0.000s

FAILED (failures=1)

As you can see, 4 tests ran and 1 failed, and the reason is given: 2.5 != 2, which means our divide method has a problem.

This is a simple test. A few points worth noting:

  1. The first line shows a result marker for each case: . for success, F for failure, E for error, and s for skip. You can also see that execution order has nothing to do with the order in which the methods are written: test_divide is defined fourth but executed second.
  2. Every test method must start with test, otherwise unittest will not recognize it.
  3. The verbosity parameter of unittest.main() controls how detailed the output report is. The default is 1. With 0, the per-case result line is omitted (i.e. the first line of the output above disappears). With 2, a detailed result is printed for each case, as follows:
test_add (__main__.TestMathFunc)
Test method add(a, b) ... ok
test_divide (__main__.TestMathFunc)
Test method divide(a, b) ... FAIL
test_minus (__main__.TestMathFunc)
Test method minus(a, b) ... ok
test_multi (__main__.TestMathFunc)
Test method multi(a, b) ... ok

======================================================================
FAIL: test_divide (__main__.TestMathFunc)
Test method divide(a, b)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:/py/test_mathfunc.py", line 26, in test_divide
    self.assertEqual(2.5, divide(5, 2))
AssertionError: 2.5 != 2

----------------------------------------------------------------------
Ran 4 tests in 0.002s

FAILED (failures=1)

As you can see, the detailed result, the name, and the description of each case are all output. (The docstring under each test method is used as the case description when it runs; an apt docstring makes the report much easier to read.)

Organize TestSuite

The code above shows how to write a simple test, but two questions remain. First, how do we control the execution order of cases? (The test methods in this example are independent, but cases you write later may have ordering dependencies: method A must run before method B.) The answer is TestSuite: cases added to a TestSuite execute in the order in which they were added.

Second, we currently have only one test file, which we can run directly. But with multiple test files, how do we organize them without running each one by hand? The answer is again TestSuite.

Here is an example:

In the same folder, create a new file, test_suite.py:

# -*- coding: utf-8 -*-

import unittest
from test_mathfunc import TestMathFunc

if __name__ == '__main__':
    suite = unittest.TestSuite()

    tests = [TestMathFunc("test_add"), TestMathFunc("test_minus"), TestMathFunc("test_divide")]
    suite.addTests(tests)

    runner = unittest.TextTestRunner(verbosity=2)
    runner.run(suite)

Execution result:

test_add (test_mathfunc.TestMathFunc)
Test method add(a, b) ... ok
test_minus (test_mathfunc.TestMathFunc)
Test method minus(a, b) ... ok
test_divide (test_mathfunc.TestMathFunc)
Test method divide(a, b) ... FAIL

======================================================================
FAIL: test_divide (test_mathfunc.TestMathFunc)
Test method divide(a, b)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\py\test_mathfunc.py", line 26, in test_divide
    self.assertEqual(2.5, divide(5, 2))
AssertionError: 2.5 != 2

----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (failures=1)

As you can see, execution went as expected: the three cases ran in the order we added them to the suite.

The example above uses TestSuite's addTests() method, passing in a list of TestCases directly. There are other ways, too:

# Add a single TestCase directly with the addTest method
suite.addTest(TestMathFunc("test_multi"))

# Using addTests + TestLoader
# loadTestsFromName(), pass in 'module_name.TestCase_name'
suite.addTests(unittest.TestLoader().loadTestsFromName('test_mathfunc.TestMathFunc'))
suite.addTests(unittest.TestLoader().loadTestsFromNames(['test_mathfunc.TestMathFunc']))  # loadTestsFromNames() is similar, but takes a list

# loadTestsFromTestCase(), passed in TestCase
suite.addTests(unittest.TestLoader().loadTestsFromTestCase(TestMathFunc))

Note that the TestLoader methods cannot control the order of the cases. Also, a suite can be nested inside another suite.
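A minimal sketch of nesting one suite inside another (the ATest/BTest classes are hypothetical, for illustration only):

```python
import io
import unittest


class ATest(unittest.TestCase):
    def test_a(self):
        self.assertEqual(1, 1)


class BTest(unittest.TestCase):
    def test_b(self):
        self.assertEqual(2, 2)


inner = unittest.TestSuite()
inner.addTest(ATest('test_a'))

outer = unittest.TestSuite()
outer.addTest(inner)            # a whole suite nested inside another suite
outer.addTest(BTest('test_b'))

result = unittest.TextTestRunner(stream=io.StringIO()).run(outer)
print(result.testsRun)  # both cases run: nested suites are walked at run time
```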

Output the results to a file

The cases are organized, but so far the results only go to the console, so there is no way to review previous runs. Let's output the results to a file instead:

Modify test_suite.py:

# -*- coding: utf-8 -*-

import unittest
from test_mathfunc import TestMathFunc

if __name__ == '__main__':
    suite = unittest.TestSuite()
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(TestMathFunc))

    with open('UnittestTextReport.txt', 'a') as f:
        runner = unittest.TextTestRunner(stream=f, verbosity=2)
        runner.run(suite)

When you run this file, UnittestTextReport.txt is generated in the same directory and the whole report is written to it, giving us a test report in TXT format.

A plain-text report is rather bare, though. Want to level up to an HTML report? unittest has no built-in HTML report, so we turn to a third-party library.

HTMLTestRunner is a third-party library that produces HTML unit test reports. Download HTMLTestRunner.py, import it, and run.

Official address: http://tungwaiyip.info/software/HTMLTestRunner.html

Modify our test_suite.py:

# -*- coding: utf-8 -*-

import unittest
from test_mathfunc import TestMathFunc
from HTMLTestRunner import HTMLTestRunner

if __name__ == '__main__':
    suite = unittest.TestSuite()
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(TestMathFunc))

    with open('HTMLReport.html', 'w') as f:
        runner = HTMLTestRunner(stream=f,
                                title='MathFunc Test Report',
                                description='generated by HTMLTestRunner.',
                                verbosity=2
                                )
        runner.run(suite)

In this way, when executing, we can see the execution status on the console, as follows:

ok test_add (test_mathfunc.TestMathFunc)
F  test_divide (test_mathfunc.TestMathFunc)
ok test_minus (test_mathfunc.TestMathFunc)
ok test_multi (test_mathfunc.TestMathFunc)

Time Elapsed: 0:00:00.001000

And output the HTML test report, HTMLReport.html.

And there's the nice-looking HTML report. In fact, you can see that HTMLTestRunner is invoked almost exactly like TextTestRunner; comparing with the earlier example, it simply takes the runner's place in the class diagram and renders the TestResult as HTML. If you dig deep enough, you can write your own runner and generate even richer, prettier reports.

TestFixture: preparing and cleaning up the environment

The tests above basically run end to end, but there are special situations: what if a test needs to prepare an environment before each run, or needs cleanup after each run? For example, connecting to a database before execution, then restoring the data and disconnecting afterwards. You can't paste environment setup and cleanup code into every test method.

This is where the test fixture mentioned earlier comes in. Modify test_mathfunc.py:

# -*- coding: utf-8 -*-

import unittest
from mathfunc import *


class TestMathFunc(unittest.TestCase):
    """Test mathfuc.py"""

    def setUp(self):
        print("do something before test.Prepare environment.")

    def tearDown(self):
        print("do something after test.Clean up.")

    def test_add(self):
        """Test method add(a, b)"""
        print("add")
        self.assertEqual(3, add(1, 2))
        self.assertNotEqual(3, add(2, 2))

    def test_minus(self):
        """Test method minus(a, b)"""
        print("minus")
        self.assertEqual(1, minus(3, 2))

    def test_multi(self):
        """Test method multi(a, b)"""
        print("multi")
        self.assertEqual(6, multi(2, 3))

    def test_divide(self):
        """Test method divide(a, b)"""
        print("divide")
        self.assertEqual(2, divide(6, 3))
        self.assertEqual(2.5, divide(5, 2))

We added two methods, setUp() and tearDown() (in fact, we overrode these two TestCase methods). They run once before and after every test method: setUp prepares the environment for a test, and tearDown cleans it up once the test has run.

Let's do it again:

test_add (test_mathfunc.TestMathFunc)
Test method add(a, b) ... ok
test_divide (test_mathfunc.TestMathFunc)
Test method divide(a, b) ... FAIL
test_minus (test_mathfunc.TestMathFunc)
Test method minus(a, b) ... ok
test_multi (test_mathfunc.TestMathFunc)
Test method multi(a, b) ... ok

======================================================================
FAIL: test_divide (test_mathfunc.TestMathFunc)
Test method divide(a, b)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\py\test_mathfunc.py", line 36, in test_divide
    self.assertEqual(2.5, divide(5, 2))
AssertionError: 2.5 != 2

----------------------------------------------------------------------
Ran 4 tests in 0.000s

FAILED (failures=1)
do something before test.Prepare environment.
add
do something after test.Clean up.
do something before test.Prepare environment.
divide
do something after test.Clean up.
do something before test.Prepare environment.
minus
do something after test.Clean up.
do something before test.Prepare environment.
multi
do something after test.Clean up.

You can see that setUp and tearDown execute once before and after each case.

If instead you want to prepare the environment once before all cases run, and clean up once after they have all finished, use setUpClass() and tearDownClass():

...

class TestMathFunc(unittest.TestCase):
    """Test mathfuc.py"""

    @classmethod
    def setUpClass(cls):
        print("This setUpClass() method only called once.")

    @classmethod
    def tearDownClass(cls):
        print("This tearDownClass() method only called once too.")

...

The results are as follows:

...
This setUpClass() method only called once.
do something before test.Prepare environment.
add
...
multi
do something after test.Clean up.
This tearDownClass() method only called once too.

You can see that setUpClass and tearDownClass each executed only once.

Some useful methods

Assert

Most tests assert the truth of some condition. There are two different ways to write truth-checking tests, depending on the perspective of the test author and the expected outcome of the code under test.

# unittest_truth.py

import unittest


class TruthTest(unittest.TestCase):

    def testAssertTrue(self):
        self.assertTrue(True)

    def testAssertFalse(self):
        self.assertFalse(False)

If the code produces a value that should be true, use assertTrue(); if it should produce a false value, assertFalse() conveys the intent better.

$ python3 -m unittest -v unittest_truth.py

testAssertFalse (unittest_truth.TruthTest) ... ok
testAssertTrue (unittest_truth.TruthTest) ... ok

----------------------------------------------------------------
Ran 2 tests in 0.000s

OK

Test equality

unittest includes the following methods to test the equality of two values:

# unittest_equality.py 

import unittest


class EqualityTest(unittest.TestCase):

    def testExpectEqual(self):
        self.assertEqual(1, 3 - 2)

    def testExpectEqualFails(self):
        self.assertEqual(2, 3 - 2)

    def testExpectNotEqual(self):
        self.assertNotEqual(2, 3 - 2)

    def testExpectNotEqualFails(self):
        self.assertNotEqual(1, 3 - 2)

When they fail, these specialized test methods produce error messages that include the values being compared.

$ python3 -m unittest -v unittest_equality.py

testExpectEqual (unittest_equality.EqualityTest) ... ok
testExpectEqualFails (unittest_equality.EqualityTest) ... FAIL
testExpectNotEqual (unittest_equality.EqualityTest) ... ok
testExpectNotEqualFails (unittest_equality.EqualityTest) ... FAIL

================================================================
FAIL: testExpectEqualFails (unittest_equality.EqualityTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_equality.py", line 15, in
testExpectEqualFails
    self.assertEqual(2, 3 - 2)
AssertionError: 2 != 1

================================================================
FAIL: testExpectNotEqualFails (unittest_equality.EqualityTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_equality.py", line 21, in
testExpectNotEqualFails
    self.assertNotEqual(1, 3 - 2)
AssertionError: 1 == 1

----------------------------------------------------------------
Ran 4 tests in 0.001s

FAILED (failures=2)

Almost equal

In addition to strict equality, the approximate equality of floating-point numbers can be tested with assertAlmostEqual() and assertNotAlmostEqual().

# unittest_almostequal.py 

import unittest


class AlmostEqualTest(unittest.TestCase):

    def testEqual(self):
        self.assertEqual(1.1, 3.3 - 2.2)

    def testAlmostEqual(self):
        self.assertAlmostEqual(1.1, 3.3 - 2.2, places=1)

    def testNotAlmostEqual(self):
        self.assertNotAlmostEqual(1.1, 3.3 - 2.0, places=1)

The arguments are the values to compare and the number of decimal places to use for the comparison.

$ python3 -m unittest unittest_almostequal.py

.F.
================================================================
FAIL: testEqual (unittest_almostequal.AlmostEqualTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_almostequal.py", line 12, in testEqual
    self.assertEqual(1.1, 3.3 - 2.2)
AssertionError: 1.1 != 1.0999999999999996

----------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (failures=1)
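Besides places, assertAlmostEqual() and assertNotAlmostEqual() also accept a delta keyword that sets an absolute tolerance instead (places and delta are mutually exclusive). A minimal sketch:

```python
import unittest


class DeltaTest(unittest.TestCase):

    def test_delta(self):
        # delta sets an absolute tolerance instead of counting decimal places
        self.assertAlmostEqual(1.1, 1.15, delta=0.1)     # |1.1 - 1.15| <= 0.1
        self.assertNotAlmostEqual(1.1, 1.3, delta=0.1)   # |1.1 - 1.3| > 0.1
```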

Containers

In addition to the generic assertEqual() and assertNotEqual(), there are also ways to compare list, dict, and set objects.

# unittest_equality_container.py 

import textwrap
import unittest


class ContainerEqualityTest(unittest.TestCase):

    def testCount(self):
        self.assertCountEqual(
            [1, 2, 3, 2],
            [1, 3, 2, 3],
        )

    def testDict(self):
        self.assertDictEqual(
            {'a': 1, 'b': 2},
            {'a': 1, 'b': 3},
        )

    def testList(self):
        self.assertListEqual(
            [1, 2, 3],
            [1, 3, 2],
        )

    def testMultiLineString(self):
        self.assertMultiLineEqual(
            textwrap.dedent("""
            This string
            has more than one
            line.
            """),
            textwrap.dedent("""
            This string has
            more than two
            lines.
            """),
        )

    def testSequence(self):
        self.assertSequenceEqual(
            [1, 2, 3],
            [1, 3, 2],
        )

    def testSet(self):
        self.assertSetEqual(
            set([1, 2, 3]),
            set([1, 3, 2, 4]),
        )

    def testTuple(self):
        self.assertTupleEqual(
            (1, 'a'),
            (1, 'b'),
        )

Each method reports failures using a format that makes sense for the input type, making test failures easier to understand and fix.

$ python3 -m unittest unittest_equality_container.py

FFFFFFF
================================================================
FAIL: testCount
(unittest_equality_container.ContainerEqualityTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_equality_container.py", line 15, in
testCount
    [1, 3, 2, 3],
AssertionError: Element counts were not equal:
First has 2, Second has 1:  2
First has 1, Second has 2:  3

================================================================
FAIL: testDict
(unittest_equality_container.ContainerEqualityTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_equality_container.py", line 21, in
testDict
    {'a': 1, 'b': 3},
AssertionError: {'a': 1, 'b': 2} != {'a': 1, 'b': 3}
- {'a': 1, 'b': 2}
?               ^

+ {'a': 1, 'b': 3}
?               ^


================================================================
FAIL: testList
(unittest_equality_container.ContainerEqualityTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_equality_container.py", line 27, in
testList
    [1, 3, 2],
AssertionError: Lists differ: [1, 2, 3] != [1, 3, 2]

First differing element 1:
2
3

- [1, 2, 3]
+ [1, 3, 2]

================================================================
FAIL: testMultiLineString
(unittest_equality_container.ContainerEqualityTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_equality_container.py", line 41, in
testMultiLineString
    """),
AssertionError: '\nThis string\nhas more than one\nline.\n' !=
'\nThis string has\nmore than two\nlines.\n'

- This string
+ This string has
?            ++++
- has more than one
? ----           --
+ more than two
?           ++
- line.
+ lines.
?     +


================================================================
FAIL: testSequence
(unittest_equality_container.ContainerEqualityTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_equality_container.py", line 47, in
testSequence
    [1, 3, 2],
AssertionError: Sequences differ: [1, 2, 3] != [1, 3, 2]

First differing element 1:
2
3

- [1, 2, 3]
+ [1, 3, 2]

================================================================
FAIL: testSet
(unittest_equality_container.ContainerEqualityTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_equality_container.py", line 53, in testSet
    set([1, 3, 2, 4]),
AssertionError: Items in the second set but not the first:
4

================================================================
FAIL: testTuple
(unittest_equality_container.ContainerEqualityTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_equality_container.py", line 59, in
testTuple
    (1, 'b'),
AssertionError: Tuples differ: (1, 'a') != (1, 'b')

First differing element 1:
'a'
'b'

- (1, 'a')
?      ^

+ (1, 'b')
?      ^


----------------------------------------------------------------
Ran 7 tests in 0.005s

FAILED (failures=7)

Use assertIn() to test container membership.

# unittest_in.py 

import unittest


class ContainerMembershipTest(unittest.TestCase):

    def testDict(self):
        self.assertIn(4, {1: 'a', 2: 'b', 3: 'c'})

    def testList(self):
        self.assertIn(4, [1, 2, 3])

    def testSet(self):
        self.assertIn(4, set([1, 2, 3]))

assertIn() works with any object that supports the in operator or the container API.

$ python3 -m unittest unittest_in.py

FFF
================================================================
FAIL: testDict (unittest_in.ContainerMembershipTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_in.py", line 12, in testDict
    self.assertIn(4, {1: 'a', 2: 'b', 3: 'c'})
AssertionError: 4 not found in {1: 'a', 2: 'b', 3: 'c'}

================================================================
FAIL: testList (unittest_in.ContainerMembershipTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_in.py", line 15, in testList
    self.assertIn(4, [1, 2, 3])
AssertionError: 4 not found in [1, 2, 3]

================================================================
FAIL: testSet (unittest_in.ContainerMembershipTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_in.py", line 18, in testSet
    self.assertIn(4, set([1, 2, 3]))
AssertionError: 4 not found in {1, 2, 3}

----------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (failures=3)

Test exceptions

As mentioned earlier, if a test raises an exception other than AssertionError, it is treated as an error. This is useful for uncovering mistakes while modifying code that already has test coverage. In some cases, however, a test should verify that certain code does raise an exception.

For example, when an invalid value is assigned to an attribute of an object. In such cases, assertRaises() makes the code clearer than trapping the exception yourself inside the test. Compare these two tests:

# unittest_exception.py 

import unittest


def raises_error(*args, **kwds):
    raise ValueError('Invalid value: ' + str(args) + str(kwds))


class ExceptionTest(unittest.TestCase):

    def testTrapLocally(self):
        try:
            raises_error('a', b='c')
        except ValueError:
            pass
        else:
            self.fail('Did not see ValueError')

    def testAssertRaises(self):
        self.assertRaises(
            ValueError,
            raises_error,
            'a',
            b='c',
        )

Both produce the same result, but the second version using assertRaises() is more concise.

$ python3 -m unittest -v unittest_exception.py

testAssertRaises (unittest_exception.ExceptionTest) ... ok
testTrapLocally (unittest_exception.ExceptionTest) ... ok

----------------------------------------------------------------
Ran 2 tests in 0.000s

OK
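assertRaises() can also be used as a context manager, which reads naturally and lets you inspect the raised exception afterwards. A sketch reusing the raises_error() helper from above:

```python
import unittest


def raises_error(*args, **kwds):
    raise ValueError('Invalid value: ' + str(args) + str(kwds))


class ExceptionContextTest(unittest.TestCase):

    def test_context_manager(self):
        # the with-block form traps the exception and exposes it on cm.exception
        with self.assertRaises(ValueError) as cm:
            raises_error('a', b='c')
        self.assertIn('Invalid value', str(cm.exception))
```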

Repeat the test with different inputs

It is often useful to run the same test logic with different inputs. Rather than defining a separate test method for each small case, a single test method can contain several related assertions. The problem with that approach is that as soon as one assertion fails, the rest are skipped. A better solution is subTest(), which creates a context for a test within the test method. If that test fails, the failure is reported and the remaining tests still run.

# unittest_subtest.py 

import unittest


class SubTest(unittest.TestCase):

    def test_combined(self):
        self.assertRegex('abc', 'a')
        self.assertRegex('abc', 'B')
        # The next assertions are not verified!
        self.assertRegex('abc', 'c')
        self.assertRegex('abc', 'd')

    def test_with_subtest(self):
        for pat in ['a', 'B', 'c', 'd']:
            with self.subTest(pattern=pat):
                self.assertRegex('abc', pat)

In this example, the test_combined() method never runs the assertions for 'c' and 'd'. The test_with_subtest() method does correctly report the additional failure. Note that even though three failures are reported, the runner still counts only two test cases.

$ python3 -m unittest -v unittest_subtest.py

test_combined (unittest_subtest.SubTest) ... FAIL
test_with_subtest (unittest_subtest.SubTest) ...
================================================================
FAIL: test_combined (unittest_subtest.SubTest)
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_subtest.py", line 13, in test_combined
    self.assertRegex('abc', 'B')
AssertionError: Regex didn't match: 'B' not found in 'abc'

================================================================
FAIL: test_with_subtest (unittest_subtest.SubTest) (pattern='B')
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_subtest.py", line 21, in test_with_subtest
    self.assertRegex('abc', pat)
AssertionError: Regex didn't match: 'B' not found in 'abc'

================================================================
FAIL: test_with_subtest (unittest_subtest.SubTest) (pattern='d')
----------------------------------------------------------------
Traceback (most recent call last):
  File ".../unittest_subtest.py", line 21, in test_with_subtest
    self.assertRegex('abc', pat)
AssertionError: Regex didn't match: 'd' not found in 'abc'

----------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=3)

Skip a case

What if we want to skip a case temporarily and not execute it? unittest also provides several methods:

skip decorator

...

class TestMathFunc(unittest.TestCase):
    """Test mathfuc.py"""

    ...

    @unittest.skip("I don't want to run this case.")
    def test_divide(self):
        """Test method divide(a, b)"""
        print("divide")
        self.assertEqual(2, divide(6, 3))
        self.assertEqual(2.5, divide(5, 2))

Execution result:

...
test_add (test_mathfunc.TestMathFunc)
Test method add(a, b) ... ok
test_divide (test_mathfunc.TestMathFunc)
Test method divide(a, b) ... skipped "I don't want to run this case."
test_minus (test_mathfunc.TestMathFunc)
Test method minus(a, b) ... ok
test_multi (test_mathfunc.TestMathFunc)
Test method multi(a, b) ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK (skipped=1)

You can see that the total number of tests is still 4, but the divide() method was skipped.

There are three skip decorators: unittest.skip(reason) skips unconditionally, unittest.skipIf(condition, reason) skips when the condition is true, and unittest.skipUnless(condition, reason) skips when the condition is false.
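A minimal sketch of the conditional variants (the conditions here are just examples):

```python
import sys
import unittest


class SkipConditionTest(unittest.TestCase):

    @unittest.skipIf(sys.platform.startswith('win'), 'not supported on Windows')
    def test_not_on_windows(self):
        self.assertTrue(True)

    @unittest.skipUnless(sys.version_info >= (3, 0), 'requires Python 3')
    def test_python3_only(self):
        self.assertTrue(True)
```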

TestCase.skipTest() method

...

class TestMathFunc(unittest.TestCase):
    """Test mathfuc.py"""

    ...

    def test_divide(self):
        """Test method divide(a, b)"""
        self.skipTest('Do not run this.')
        print("divide")
        self.assertEqual(2, divide(6, 3))
        self.assertEqual(2.5, divide(5, 2))

Output:

...
test_add (test_mathfunc.TestMathFunc)
Test method add(a, b) ... ok
test_divide (test_mathfunc.TestMathFunc)
Test method divide(a, b) ... skipped 'Do not run this.'
test_minus (test_mathfunc.TestMathFunc)
Test method minus(a, b) ... ok
test_multi (test_mathfunc.TestMathFunc)
Test method multi(a, b) ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.001s

OK (skipped=1)

The effect is the same as the decorator above, skipping the divide method.

Ignore failed tests

If a test is known to fail, you can mark it with the expectedFailure() decorator so the failure is not counted.

# unittest_expectedfailure.py 

import unittest


class Test(unittest.TestCase):

    @unittest.expectedFailure
    def test_never_passes(self):
        self.assertTrue(False)

    @unittest.expectedFailure
    def test_always_passes(self):
        self.assertTrue(True)

If a test marked as an expected failure actually passes, that condition is treated as a special type of failure and reported as an "unexpected success".

$ python3 -m unittest -v unittest_expectedfailure.py

test_always_passes (unittest_expectedfailure.Test) ...
unexpected success
test_never_passes (unittest_expectedfailure.Test) ... expected
failure

----------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (expected failures=1, unexpected successes=1)

Summary

  1. unittest is Python's built-in unit testing framework; we can use it as the case-organization and execution layer of an automated testing framework.
  2. The unittest workflow: write TestCases, load them from a TestLoader into a TestSuite, then run the TestSuite with a TextTestRunner; the results are saved in a TextTestResult. When executing via the command line or unittest.main(), main calls run in TextTestRunner; you can also run the cases with TextTestRunner directly.
  3. A class inheriting unittest.TestCase is a test case class; its methods starting with test are loaded as the actual TestCases.
  4. The verbosity parameter controls the output of execution results: 0 for a minimal report, 1 for a normal report, 2 for a detailed report.
  5. You can add a case or suite to a suite with addTest and addTests, or use the loadTestsFrom__() methods of TestLoader.
  6. With setUp(), tearDown(), setUpClass(), and tearDownClass() you can prepare the environment before cases run and clean it up afterwards.
  7. You can skip a case with the skip, skipIf, or skipUnless decorators, or with the TestCase.skipTest() method.
  8. Pass a stream parameter to write the report to a file: TextTestRunner can produce a txt report, and HTMLTestRunner an HTML report.

Related documents:

https://pymotw.com/3/unittest/index.html

https://huilansame.github.io/huilansame.github.io/archivers/python-unittest

https://segmentfault.com/a/1190000016315201#articleHeader0


Posted by forgun on Tue, 21 Jan 2020 02:50:39 -0800