01 - Common Pytest commands
1. Run a single test case
pytest -v test_day01.py::test_simple_case1
Output:
Administrator@PC-202102061358 MINGW64 /d/python_demo/pytest
$ pytest -v test_day01.py::test_simple_case1
=============================== test session starts ===============================
platform win32 -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- d:\python_demo\pytest\venv\scripts\python.exe
cachedir: .pytest_cache
rootdir: D:\python_demo\pytest
collected 1 item

test_day01.py::test_simple_case1 PASSED                                      [100%]

================================ 1 passed in 0.03s ================================
Only test_simple_case1 in test_day01.py was run.
Note that there must be no spaces around "::", otherwise the command will not run.
2. Help option
pytest --help || pytest -h
Output:
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         only run tests which match the given substring
                        expression. An expression is a python evaluatable
                        expression where all names are substring-matched
                        against test names and their parent classes.
                        Example: -k 'test_method or test_other' matches all
                        test functions and classes whose name contains
                        'test_method' or 'test_other', while -k 'not
                        test_method' matches those that don't contain
                        'test_method' in their names. -k 'not test_method
                        and not test_other' will eliminate the matches.
                        Additionally keywords are matched to classes and
                        functions containing extra names in their
                        'extra_keyword_matches' set, as well as functions
                        which have names assigned directly to them. The
                        matching is case-insensitive.
  -m MARKEXPR           only run tests matching given mark expression.
                        For example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       exit instantly on first error or failed test.
  --fixtures, --funcargs
                        show available fixtures, sorted by plugin appearance
                        (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   show fixtures per test
  --pdb                 start the interactive Python debugger on errors or
                        KeyboardInterrupt.
  --pdbcls=modulename:classname
                        start a custom interactive Python debugger on
                        errors. For example:
                        --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test.
  --capture=method      per-test capturing method: one of fd|sys|no|tee-sys.
  -s                    shortcut for --capture=no.
  --runxfail            report the results of xfail tests as if they were
                        not marked
  --lf, --last-failed   rerun only the tests that failed at the last run (or
                        all if none failed)
  --ff, --failed-first  run all tests, but run the last failures first. This
                        may re-order tests and thus lead to repeated fixture
                        setup/teardown.
  --nf, --new-first     run tests from new files first, then the rest of the
                        tests sorted by file mtime
  --cache-show=[CACHESHOW]
                        show cache contents, don't perform collection or
                        tests. Optional argument: glob (default: '*').
  --cache-clear         remove all cache contents at start of test run.
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        which tests to run with no previously (known)
                        failures.
  --sw, --stepwise      exit on test failure and continue from last failing
                        test next time
  --sw-skip, --stepwise-skip
                        ignore the first failing test but stop on the next
                        failing test
  ... (more options omitted) ...
The general group contains most of the commonly used options; for the remaining options, see the article "pytest 132 command line parameter usage" on TesterHome.
3. -k parameter
The -k parameter allows testers to use expressions to filter test cases they want to run
# run cases whose names contain case1 or case2
pytest -k "case1 or case2"
# run cases whose names contain both test and login
pytest -k "test and login"
# run cases whose names do not contain logout
pytest -k "not logout"
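As a hypothetical illustration (these test names are not from the project above), suppose a file contains:

# test_auth.py -- hypothetical names, only for illustrating -k
def test_login_case1(): ...
def test_login_case2(): ...
def test_logout_case3(): ...

Then -k "case1 or case2" selects the first two tests, -k "test and login" selects the two login tests, and -k "not logout" runs everything except test_logout_case3.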
4. -m parameter
The -m parameter runs one or more marked test cases.
For example, add the @pytest.mark.setup_cases decorator to test_upload() and test_detail():
import pytest
...

@pytest.mark.setup_cases
def test_upload():
    ...

@pytest.mark.setup_cases
def test_detail():
    ...
Then run:
pytest -m "setup_cases"
Entering this command executes the two marked tests above. The -m parameter can also combine multiple marker names with "and", "or" and "not", in the same way as the -k parameter.
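Note that pytest emits a PytestUnknownMarkWarning for custom markers such as setup_cases unless they are registered. One way to register them is a pytest_configure hook in conftest.py, sketched below (the description text is just an example):

# conftest.py
def pytest_configure(config):
    # register the custom marker so pytest does not warn about it
    config.addinivalue_line("markers", "setup_cases: environment setup cases")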
5. -x parameter
The -x parameter is also known as --exitfirst. By default pytest runs every collected test: when a test fails, it records the result and keeps going with the rest. Sometimes we want pytest to stop as soon as it hits a failure, and that is what -x does.
pytest -x
When a failure occurs, the command line displays:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
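For example, with a hypothetical file like the one below, a plain pytest run reports both failures, while pytest -x stops right after test_a fails and never runs test_b or test_c:

# test_stop.py -- hypothetical file, only for illustrating -x
def test_a():
    assert 1 == 2   # fails first, so -x stops here

def test_b():
    assert 3 == 4   # not reached with -x

def test_c():
    assert 5 == 5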
6. --maxfail=num parameter
The -x parameter stops at the first failure. The --maxfail parameter instead specifies the maximum number of failures allowed before pytest stops.
pytest --maxfail=5
There must be no spaces around "=", otherwise the command will not run.
7. -s || --capture=no
By default pytest captures stdout, so print output is not shown while the tests run. Passing -s or --capture=no turns capturing off, so print output appears directly in the console, which is handy for debugging.
pytest -s # or pytest --capture=no
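For example, with a test like the hypothetical sketch below, a plain pytest run captures the print output (it is only shown for failing tests), while pytest -s prints "token = abc123" to the console as the test runs:

# test_print.py -- hypothetical test, only for illustrating -s
def test_print_sys():
    token = "abc123"
    print(f"token = {token}")  # visible live only with -s / --capture=no
    assert token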
8. --lf (--last-failed) and --ff (--failed-first)
After a test run, we can rerun only the tests that failed in the last run with --lf.
# first run the whole test suite
pytest
# then use --lf to rerun only the failed cases
pytest --lf
The difference between --ff and --lf is that --lf reruns only the tests that failed last time, while --ff runs the previously failed tests first and then the remaining tests.
# first run the whole test suite
pytest
# use --ff to run the failed cases first, then the successful ones
pytest --ff
Comparison of --lf and --ff results:
# --lf
collected 3 items / 2 deselected / 1 selected
run-last-failure: rerun previous 1 failure

test_day01.py F

# ====================================================
# --ff
collected 3 items
run-last-failure: rerun previous 1 failure first

test_day01.py F..
9. -v || --verbose parameter
-v/--verbose displays the test results in more detail.
...
# the output format differs with -v
test_day01.py::test_simple_case PASSED                                        [ 33%]
test_day01.py::test_simple_case1 FAILED                                       [ 66%]
test_day01.py::test_print_sys PASSED                                          [100%]
10. -q || --quiet parameter
In contrast to -v, -q simplifies the output information.
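For example:
pytest -q # or pytest --quiet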
11. -l || --showlocals
The -l parameter outputs the values of local variables in a failed test case.
...
===================================== FAILURES =====================================
________________________________ test_simple_case1 _________________________________

    def test_simple_case1():
        dic = {"key1": "123", "key2": "456"}
>       assert dic == {"key1": "1231", "key2": "456"}
E       AssertionError: assert {'key1': '123', 'key2': '456'} == {'key1': '123...'key2': '456'}
E         Omitting 1 identical items, use -vv to show
E         Differing items:
E         {'key1': '123'} != {'key1': '1231'}
E         Use -v to get the full diff

dic        = {'key1': '123', 'key2': '456'}
The output contains the value of the dic variable
12. --tb=style parameter
- --tb=no: suppress traceback output entirely
- --tb=line: show only one line per failure, pointing at the failing location
- --tb=short: show a shortened traceback (the failing assert line plus the error)
- --tb=long: the most detailed traceback output
- --tb=auto (default): use the "long" format for the first and last traceback entries and the "short" format for the entries in between
- --tb=native: use the Python standard library's traceback formatting
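For example, to keep failure output short while debugging a large suite:
pytest --tb=line # one line per failure
pytest --tb=short # shortened tracebacks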
13. --durations=N parameter
Reports the N slowest test durations (setup, call, and teardown phases).
$ pytest request_test.py --durations=100000
Output:
$ pytest request_test.py --durations=100000
=============================== test session starts ===============================
platform win32 -- Python 3.8.10, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: D:\python_demo\pytest
collected 3 items

request_test.py ...                                                          [100%]

============================= slowest 100000 durations =============================
0.25s call     request_test.py::test_upload
0.24s call     request_test.py::test_detail
0.23s call     request_test.py::test_clean

(6 durations < 0.005s hidden.  Use -vv to show these durations.)
================================= 3 passed in 0.80s ================================
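If you want to see where the time goes, a hypothetical slow test like the sketch below would appear at the top of the --durations list, with its call phase at roughly 0.3s:

# test_slow.py -- hypothetical test, only for illustrating --durations
import time

def test_slow_call():
    time.sleep(0.3)  # simulate a slow operation; reported under the "call" phase
    assert True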