Catalog
- 1. cacheprovider plug-in
- 1.1. --lf, --last-failed: Execute only the last failed use case
- 1.2. --ff, --failed-first: execute the last failed use case before the others
- 1.3. --nf, --new-first: execute new or modified use cases before others
- 1.4. --cache-clear: Clear all caches before executing use cases
- 1.5. If there were no failed use cases in the previous round
- 2. config.cache object
- 3. Stepwise
Series index: https://www.cnblogs.com/luizyao/p/11771740.html
pytest writes the execution state of this round of tests to the .pytest_cache folder; this is implemented by its built-in cacheprovider plug-in.
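Since it is an ordinary plug-in, it can also be disabled outright if we never want cache files to be written, using pytest's standard -p no:<plugin> mechanism:

λ pipenv run pytest -p no:cacheprovider src/chapter-12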
Note:
By default, pytest writes the test execution state to the .pytest_cache folder under the rootdir. We can customize this location with the cache_dir option in pytest.ini, which accepts either a relative or an absolute path.
A relative path is interpreted relative to the directory containing pytest.ini. For example, we can keep the cache for this chapter together with its source code:
Add the following configuration to src/chapter-12/pytest.ini:
[pytest]
cache_dir = .pytest-cache
This way, even if we execute the use cases in src/chapter-12/ from the root of the project, the cache is generated only in pytest-chinese-doc/src/chapter-12/.pytest-cache, not in pytest-chinese-doc/.pytest_cache;
pytest-chinese-doc (5.1.3)
λ pipenv run pytest src/chapter-12
1. cacheprovider plug-in
Before introducing this plug-in, let's take a look at a simple example:
# src/chapter-12/test_failed.py

import pytest

@pytest.mark.parametrize('num', [1, 2])
def test_failed(num):
    assert num == 1


# src\chapter-12\test_pass.py

def test_pass():
    assert 1
We have two simple test modules; let's execute them first:
λ pipenv run pytest -q src/chapter-12/
.F.                                                                [100%]
=============================== FAILURES ================================
____________________________ test_failed[2] _____________________________

num = 2

    @pytest.mark.parametrize('num', [1, 2])
    def test_failed(num):
>       assert num == 1
E       assert 2 == 1

src\chapter-12\test_failed.py:27: AssertionError
1 failed, 2 passed in 0.08s
You can see that a total of three test cases were collected: one failed and the other two passed, and the two passing cases belong to different test modules.
At the same time, pytest generates a cache folder (.pytest-cache) in the src/chapter-12/ directory, with the following structure:
src
├───chapter-12
│   │   pytest.ini      # cache_dir = .pytest-cache configured
│   │   test_failed.py
│   │   test_pass.py
│   │
│   └───.pytest-cache
│       │   .gitignore
│       │   CACHEDIR.TAG
│       │   README.md
│       │
│       └───v
│           └───cache
│                   lastfailed
│                   nodeids
│                   stepwise
Now let's take a look at the functionality of the cacheprovider plug-in, in conjunction with the directory structure above.
1.1. --lf, --last-failed: Execute only the last failed use case
The lastfailed file in the cache records the IDs of the use cases that failed last time, and we can view its content with the --cache-show option:
The --cache-show option is also provided by the cacheprovider plug-in; it does not cause any use case to execute;
λ pipenv run pytest src/chapter-12/ -q --cache-show 'lastfailed'
cachedir: D:\Personal Files\Projects\pytest-chinese-doc\src\chapter-12\.pytest-cache
--------------------- cache values for 'lastfailed' ---------------------
cache\lastfailed contains:
  {'test_failed.py::test_failed[2]': True}

no tests ran in 0.01s
We can see that it records the ID of the use case that failed last time: test_failed.py::test_failed[2];
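Incidentally, every cache value is stored on disk as plain JSON, so we can also read the file directly; here is a minimal sketch (the script name is hypothetical), assuming the cache_dir configured above:

# read_lastfailed.py -- a hypothetical helper, not part of pytest
import json
from pathlib import Path

# assumes cache_dir = .pytest-cache, as configured in pytest.ini above
cache_file = Path("src/chapter-12/.pytest-cache/v/cache/lastfailed")

with cache_file.open() as f:
    lastfailed = json.load(f)  # e.g. {'test_failed.py::test_failed[2]': True}

print(list(lastfailed))  # nodeids of the use cases that failed last time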
On the next run, if we use the --lf option, pytest will select only this failed use case during the collection phase and ignore the others:
λ pipenv run pytest --lf --collect-only src/chapter-12/
========================== test session starts ==========================
platform win32 -- Python 3.7.3, pytest-5.1.3, py-1.8.0, pluggy-0.13.0
cachedir: .pytest-cache
rootdir: D:\Personal Files\Projects\pytest-chinese-doc\src\chapter-12, inifile: pytest.ini
collected 2 items / 1 deselected / 1 selected
<Module test_failed.py>
  <Function test_failed[2]>
run-last-failure: rerun previous 1 failure (skipped 2 files)

========================= 1 deselected in 0.02s =========================
Let's take a closer look at the output above; one line may be a little confusing: collected 2 items / 1 deselected / 1 selected. We know there are three use cases, so how were only two collected?
In fact, --lf takes effect through two hook methods in the collection phase: pytest_ignore_collect(path, config) and pytest_collection_modifyitems(session, config, items);
Let's look at pytest_ignore_collect(path, config) first: if it returns True, the use cases under path are ignored.
# _pytest/cacheprovider.py

def last_failed_paths(self):
    """Returns a set with all Paths()s of the previously failed nodeids (cached).
    """
    try:
        return self._last_failed_paths
    except AttributeError:
        rootpath = Path(self.config.rootdir)
        result = {rootpath / nodeid.split("::")[0] for nodeid in self.lastfailed}
        result = {x for x in result if x.exists()}
        self._last_failed_paths = result
        return result

def pytest_ignore_collect(self, path):
    """
    Ignore this file path if we are in --lf mode and it is not in the list of
    previously failed files.
    """
    if self.active and self.config.getoption("lf") and path.isfile():
        last_failed_paths = self.last_failed_paths()
        if last_failed_paths:
            skip_it = Path(path) not in self.last_failed_paths()
            if skip_it:
                self._skipped_files += 1
            return skip_it
You can see that if the file currently being collected is not in the set of paths that failed last time, it is ignored; so this run does not collect the use cases in test_pass.py, and only two use cases are collected. pytest.ini is also on the ignored list, so two files are actually skipped: (skipped 2 files);
As for the pytest_collection_modifyitems(session, config, items) hook method, we'll look at it in the next section, together with the --ff option.
1.2. --ff, --failed-first: execute the last failed use case before the others
Before analyzing its implementation, let's see the effect of this option in practice:
λ pipenv run pytest --collect-only -s --ff src/chapter-12/
========================== test session starts ==========================
platform win32 -- Python 3.7.3, pytest-5.1.3, py-1.8.0, pluggy-0.13.0
cachedir: .pytest-cache
rootdir: D:\Personal Files\Projects\pytest-chinese-doc\src\chapter-12, inifile: pytest.ini
collected 3 items
<Module test_failed.py>
  <Function test_failed[2]>
  <Function test_failed[1]>
<Module test_pass.py>
  <Function test_pass>
run-last-failure: rerun previous 1 failure first

========================= no tests ran in 0.02s =========================
We can see that all three test cases are collected, and the use case that failed in the previous round, test_failed.py::test_failed[2], is placed at the top, taking precedence over the normal collection order;
In fact, --ff just implements the hook method pytest_collection_modifyitems(session, config, items), which can filter and reorder the collected use cases:
# _pytest/cacheprovider.py

def pytest_collection_modifyitems(self, session, config, items):
    ...
    if self.config.getoption("lf"):
        items[:] = previously_failed
        config.hook.pytest_deselected(items=previously_passed)
    else:  # --failedfirst
        items[:] = previously_failed + previously_passed
    ...
You can see that with --lf, the previously passed use cases are marked as deselected and ignored in this round; with --ff, the previously failed use cases are simply moved to the front of the order;
We can also see that --lf has a higher priority than --ff, so when they are used together, --ff has no effect.
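As an aside, pytest_collection_modifyitems is a public hook that we can implement in our own conftest.py as well; here is a minimal sketch (hypothetical, not pytest's own code) that moves use cases whose ID contains 'failed' to the front:

# conftest.py -- a hypothetical sketch of reordering collected use cases

def pytest_collection_modifyitems(session, config, items):
    # list.sort is stable: items whose nodeid contains 'failed' get key False
    # and sort to the front; everything else keeps its relative order
    items.sort(key=lambda item: 'failed' not in item.nodeid)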
1.3. --nf, --new-first: execute new or modified use cases before others
The nodeids file in the cache records all use cases performed in the previous round:
λ pipenv run pytest src/chapter-12 --cache-show 'nodeids'
========================== test session starts ==========================
platform win32 -- Python 3.7.3, pytest-5.1.3, py-1.8.0, pluggy-0.13.0
cachedir: .pytest-cache
rootdir: D:\Personal Files\Projects\pytest-chinese-doc\src\chapter-12, inifile: pytest.ini
cachedir: D:\Personal Files\Projects\pytest-chinese-doc\src\chapter-12\.pytest-cache
---------------------- cache values for 'nodeids' -----------------------
cache\nodeids contains:
  ['test_failed.py::test_failed[1]',
   'test_failed.py::test_failed[2]',
   'test_pass.py::test_pass']

========================= no tests ran in 0.01s =========================
We see three test cases executed in the last round;
Now let's add a new use case to test_pass.py, and modify the use case in test_failed.py (without adding a new one):
# src\chapter-12\test_pass.py

def test_pass():
    assert 1

def test_new_pass():
    assert 1
Now let's execute the collection command again:
λ pipenv run pytest --collect-only -s --nf src/chapter-12/
========================== test session starts ==========================
platform win32 -- Python 3.7.3, pytest-5.1.3, py-1.8.0, pluggy-0.13.0
cachedir: .pytest-cache
rootdir: D:\Personal Files\Projects\pytest-chinese-doc\src\chapter-12, inifile: pytest.ini
collected 4 items
<Module test_pass.py>
  <Function test_new_pass>
<Module test_failed.py>
  <Function test_failed[1]>
  <Function test_failed[2]>
<Module test_pass.py>
  <Function test_pass>

========================= no tests ran in 0.03s =========================
You can see that new use cases come first, followed by modified ones, and finally the old ones; this behavior is reflected in the source code:
# _pytest/cacheprovider.py

def pytest_collection_modifyitems(self, session, config, items):
    if self.active:
        new_items = OrderedDict()
        other_items = OrderedDict()
        for item in items:
            if item.nodeid not in self.cached_nodeids:
                new_items[item.nodeid] = item
            else:
                other_items[item.nodeid] = item

        items[:] = self._get_increasing_order(
            new_items.values()
        ) + self._get_increasing_order(other_items.values())
    self.cached_nodeids = [x.nodeid for x in items if isinstance(x, pytest.Item)]

def _get_increasing_order(self, items):
    return sorted(items, key=lambda item: item.fspath.mtime(), reverse=True)
item.fspath.mtime() is the last modification time of the file containing the use case, and reverse=True makes the sort descending (newest first);
items[:] = self._get_increasing_order(new_items.values()) + self._get_increasing_order(other_items.values()) guarantees that the newly added use cases always come first;
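The effect of _get_increasing_order is easy to reproduce with plain objects; here is a minimal standalone sketch (FakeItem is a hypothetical stand-in for a collected item):

# a standalone sketch of the same ordering idea

class FakeItem:
    def __init__(self, nodeid, mtime):
        self.nodeid = nodeid
        self.mtime = mtime  # stands in for item.fspath.mtime()

def get_increasing_order(items):
    # newest modification time first, like _get_increasing_order
    return sorted(items, key=lambda item: item.mtime, reverse=True)

new = [FakeItem("test_pass.py::test_new_pass", mtime=300)]
old = [FakeItem("test_failed.py::test_failed[1]", mtime=200),
       FakeItem("test_pass.py::test_pass", mtime=300)]

# new use cases always come first; within each group, newest file first
print([i.nodeid for i in get_increasing_order(new) + get_increasing_order(old)])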
1.4. --cache-clear: Clear all caches before executing use cases
Look directly at the source code:
# _pytest/cacheprovider.py

class Cache:
    ...

    @classmethod
    def for_config(cls, config):
        cachedir = cls.cache_dir_from_config(config)
        if config.getoption("cacheclear") and cachedir.exists():
            rm_rf(cachedir)
            cachedir.mkdir()
        return cls(cachedir, config)
You can see that it deletes the existing cache folder (rm_rf(cachedir)) and creates an empty folder with the same name (cachedir.mkdir()), which defeats all of the features above, so this option is rarely used;
1.5. If there were no failed use cases in the previous round
Now let's clear the cache and execute the test_pass.py module (whose use cases are all successful):
λ pipenv run pytest --cache-clear -q -s src/chapter-12/test_pass.py
.
1 passed in 0.01s
Now let's take a look at the cache directory:
.pytest-cache
└───v
    └───cache
            nodeids
            stepwise
Is there anything missing? Yes! Since there were no failed use cases, the lastfailed file is not generated. So what happens if we now use --lf and --ff? Let's try:
Note:
If we look carefully, we will find that the cache directory is now missing not only the lastfailed file, but also the CACHEDIR.TAG, .gitignore, and README.md files.
This is a bug; I have submitted an issue against pytest 5.3.1 and expect it to be fixed in a later version. If you are interested in its cause and solution, see: https://github.com/pytest-dev/pytest/issues/6290
luyao@NJ-LUYAO-T460 /d/Personal Files/Projects/pytest-chinese-doc (5.1.3)
λ pipenv run pytest -q -s --lf src/chapter-12/test_pass.py
.
1 passed in 0.01s

luyao@NJ-LUYAO-T460 /d/Personal Files/Projects/pytest-chinese-doc (5.1.3)
λ pipenv run pytest -q -s --ff src/chapter-12/test_pass.py
.
1 passed in 0.02s
You can see that they have no effect. Why is this? Let's go to the source code for the answer.
# _pytest/cacheprovider.py

class LFPlugin:
    """ Plugin which implements the --lf (run last-failing) option """

    def __init__(self, config):
        ...
        self.lastfailed = config.cache.get("cache/lastfailed", {})
        ...

    def pytest_collection_modifyitems(self, session, config, items):
        ...
        if self.lastfailed:
            ...
        else:
            self._report_status = "no previously failed tests, "
            if self.config.getoption("last_failed_no_failures") == "none":
                self._report_status += "deselecting all items."
                config.hook.pytest_deselected(items=items)
                items[:] = []
            else:
                self._report_status += "not deselecting items."
You can see that when self.lastfailed is empty, if the last_failed_no_failures option is set to none, pytest deselects all use cases (items[:] = []); otherwise it makes no modification at all (just as if --lf or --ff had not been given). And self.lastfailed is populated from the lastfailed cache file;
Reading further, we come across a new command line option:
# _pytest/cacheprovider.py

group.addoption(
    "--lfnf",
    "--last-failed-no-failures",
    action="store",
    dest="last_failed_no_failures",
    choices=("all", "none"),
    default="all",
    help="which tests to run with no previously (known) failures.",
)
Give it a try:
λ pipenv run pytest -q -s --ff --lfnf none src/chapter-12/test_pass.py
1 deselected in 0.01s

λ pipenv run pytest -q -s --ff --lfnf all src/chapter-12/test_pass.py
.
1 passed in 0.01s
Note:
The argument to --lfnf only supports the choices ("all", "none");
2. config.cache object
We can access and set the data in the cache through pytest's config object; here is a simple example:
# content of test_caching.py

import pytest
import time


def expensive_computation():
    print("running expensive computation...")


@pytest.fixture
def mydata(request):
    val = request.config.cache.get("example/value", None)
    if val is None:
        expensive_computation()
        val = 42
        request.config.cache.set("example/value", val)
    return val


def test_function(mydata):
    assert mydata == 23
Let's execute this test case once:
λ pipenv run pytest -q src/chapter-12/test_caching.py
F                                                                  [100%]
================================ FAILURES =================================
______________________________ test_function ______________________________

mydata = 42

    def test_function(mydata):
>       assert mydata == 23
E       assert 42 == 23

src/chapter-12/test_caching.py:43: AssertionError
-------------------------- Captured stdout setup --------------------------
running expensive computation...
1 failed in 0.05s
At this point there is no example/value in the cache, so the value 42 is written to the cache and the terminal prints running expensive computation...;
Looking at the cache, we find a new file has been added: .pytest-cache/v/example/value;
.pytest-cache/
├── .gitignore
├── CACHEDIR.TAG
├── README.md
└── v
    ├── cache
    │   ├── lastfailed
    │   ├── nodeids
    │   └── stepwise
    └── example
        └── value

3 directories, 7 files
Viewing it with the --cache-show option, we can see that its content is exactly 42:
λ pipenv run pytest src/chapter-12/ -q --cache-show 'example/value'
cachedir: /Users/yaomeng/Private/Projects/pytest-chinese-doc/src/chapter-12/.pytest-cache
-------------------- cache values for 'example/value' ---------------------
example/value contains:
  42

no tests ran in 0.00s
If we execute this use case again, the cache already holds the data we need, and the terminal no longer prints running expensive computation...:
λ pipenv run pytest -q src/chapter-12/test_caching.py
F                                                                  [100%]
================================ FAILURES =================================
______________________________ test_function ______________________________

mydata = 42

    def test_function(mydata):
>       assert mydata == 23
E       assert 42 == 23

src/chapter-12/test_caching.py:43: AssertionError
1 failed in 0.04s
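One more point worth noting: cache values are serialized to JSON on disk, so only combinations of basic Python types (numbers, strings, lists, dicts, ...) can be stored; a minimal sketch, with a hypothetical key and fixture name:

# a minimal sketch: cached values must be JSON-serializable
import pytest

@pytest.fixture
def cached_config(request):
    cfg = request.config.cache.get("example/config", None)
    if cfg is None:
        cfg = {"retries": 3, "hosts": ["a", "b"]}  # nested basic types are fine
        request.config.cache.set("example/config", cfg)
    return cfg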
3. Stepwise
Imagine a scenario where we want to exit execution at the first failed use case, and to resume execution from that use case next time.
The following test module is an example:
# src/chapter-12/test_sample.py

def test_one():
    assert 1

def test_two():
    assert 0

def test_three():
    assert 1

def test_four():
    assert 0

def test_five():
    assert 1
Let's start with a first run: pipenv run pytest --cache-clear --sw src/chapter-12/test_sample.py;
λ pipenv run pytest --cache-clear --sw -q src/chapter-12/test_sample.py
.F
================================= FAILURES =================================
_________________________________ test_two _________________________________

    def test_two():
>       assert 0
E       assert 0

src/chapter-12/test_sample.py:28: AssertionError
!!!!!! Interrupted: Test failed, continuing from this test next run. !!!!!!!
1 failed, 1 passed in 0.13s
We use --cache-clear to clear the previous cache, and --sw (--stepwise) to exit execution at the first failed use case;
Now the lastfailed file in the cache records this run's failed use case, test_two(); nodeids records all the test cases; and, notably, stepwise records the last failed use case, which is also test_two();
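We can confirm this with the --cache-show option from earlier; for example, the following command should display the stepwise value without executing any use case (output omitted here):

λ pipenv run pytest src/chapter-12 -q --cache-show 'stepwise'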
Next, we run it again with --sw: pytest first reads the value in stepwise and starts execution from that use case;
λ pipenv run pytest --sw -q src/chapter-12/test_sample.py
F
================================= FAILURES =================================
_________________________________ test_two _________________________________

    def test_two():
>       assert 0
E       assert 0

src/chapter-12/test_sample.py:28: AssertionError
!!!!!! Interrupted: Test failed, continuing from this test next run. !!!!!!!
1 failed, 1 deselected in 0.12s
You can see that test_two() starts execution as the first use case and exits at the first failure;
In fact, pytest also provides the command line option --stepwise-skip, which ignores the first failed use case and exits execution at the second failure. Let's try it:
λ pipenv run pytest --sw --stepwise-skip -q src/chapter-12/test_sample.py
F.F
=============================== FAILURES ================================
_______________________________ test_two ________________________________

    def test_two():
>       assert 0
E       assert 0

src\chapter-12\test_sample.py:28: AssertionError
_______________________________ test_four _______________________________

    def test_four():
>       assert 0
E       assert 0

src\chapter-12\test_sample.py:36: AssertionError
!!!!! Interrupted: Test failed, continuing from this test next run. !!!!!
2 failed, 1 passed, 1 deselected in 0.16s
At this point, execution exits at the second failed use case, test_four(), and the value of the stepwise file is changed to "test_sample.py::test_four";
In fact, everything in this chapter can be found in the _pytest/cacheprovider.py source file; studying it alongside the source will give you twice the result with half the effort.