Make cache plugin cumulative #2621

Merged: 3 commits, Jul 28, 2017
26 changes: 11 additions & 15 deletions _pytest/cacheprovider.py
@@ -105,27 +105,22 @@ def __init__(self, config):
         self.config = config
         active_keys = 'lf', 'failedfirst'
         self.active = any(config.getvalue(key) for key in active_keys)
-        if self.active:
-            self.lastfailed = config.cache.get("cache/lastfailed", {})
-        else:
-            self.lastfailed = {}
+        self.lastfailed = config.cache.get("cache/lastfailed", {})

     def pytest_report_header(self):
         if self.active:
             if not self.lastfailed:
                 mode = "run all (no recorded failures)"
             else:
-                mode = "rerun last %d failures%s" % (
Member Author:
Removed this number of failures because it might be incorrect (both before and after this patch): we don't know which tests we will actually run, because that's decided only after collection. Because of this, I decided to remove the promise that we will run X failures at this point.

Member Author:
This problem is fixed by #2624

-                    len(self.lastfailed),
+                mode = "rerun previous failures%s" % (
                     " first" if self.config.getvalue("failedfirst") else "")
             return "run-last-failure: %s" % mode
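To illustrate the behavior change above, here is a minimal stand-alone sketch of the new header logic. `run_last_failure_header` is a hypothetical helper for illustration, not pytest API; it mirrors the patched branch that drops the failure count from the message:

```python
def run_last_failure_header(lastfailed, failedfirst):
    # Mirrors the patched pytest_report_header logic: no failure count is
    # promised any more, only whether failures are rerun first.
    if not lastfailed:
        mode = "run all (no recorded failures)"
    else:
        mode = "rerun previous failures%s" % (
            " first" if failedfirst else "")
    return "run-last-failure: %s" % mode


print(run_last_failure_header({}, False))
# run-last-failure: run all (no recorded failures)
print(run_last_failure_header({"test_a.py::test_a": True}, True))
# run-last-failure: rerun previous failures first
```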

     def pytest_runtest_logreport(self, report):
-        if report.failed and "xfail" not in report.keywords:
+        if (report.when == 'call' and report.passed) or report.skipped:
+            self.lastfailed.pop(report.nodeid, None)
+        elif report.failed:
             self.lastfailed[report.nodeid] = True
-        elif not report.failed:
-            if report.when == "call":
-                self.lastfailed.pop(report.nodeid, None)
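The new update rule in this hunk can be exercised in isolation. This is a sketch under assumptions: `Report` is a namedtuple stand-in for pytest's `TestReport` (only the fields the hook consults), and `update_lastfailed` is an illustrative name, not pytest API:

```python
from collections import namedtuple

# Stand-in for pytest's TestReport; only the fields the hook reads.
Report = namedtuple("Report", "nodeid when passed failed skipped")


def update_lastfailed(lastfailed, report):
    # Same decision as the patched hook: a passing call phase (or a skip)
    # clears the cached entry; any failed phase records it.
    if (report.when == "call" and report.passed) or report.skipped:
        lastfailed.pop(report.nodeid, None)
    elif report.failed:
        lastfailed[report.nodeid] = True


lf = {}
update_lastfailed(lf, Report("test_a.py::test_a", "call", False, True, False))
print(lf)  # failure recorded: {'test_a.py::test_a': True}
update_lastfailed(lf, Report("test_a.py::test_a", "call", True, False, False))
print(lf)  # a later passing run clears it: {}
```

Note that a setup- or teardown-phase failure is also recorded, which is part of what makes the cache cumulative across phases.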

     def pytest_collectreport(self, report):
         passed = report.outcome in ('passed', 'skipped')
@@ -147,11 +142,11 @@ def pytest_collection_modifyitems(self, session, config, items):
                         previously_failed.append(item)
                     else:
                         previously_passed.append(item)
-                if not previously_failed and previously_passed:
Member Author:
I didn't quite understand why previously_passed was being considered here for this condition... to me it makes sense to skip deselecting items if we didn't find any previous failures in the current collection.

Member:
That happens when we have failed tests that are outside of the selection that's currently being executed.

Member Author:
Sorry, what happens?

Member:
If test_a.py::test_a failed and you run pytest test_b.py --lf, then you shouldn't remove passed tests from the test set; that way you can run in lf mode and work through subsets of the failure set until they pass.

Member Author:
I see, but unless I'm mistaken that is covered by the new test I added, right?

I changed the condition to "if not previously_failed: return"; in other words, don't skip anything if no collected item is part of the previous-failures set.

So running pytest test_b.py --lf will only collect test_b.py::* tests, which means previously_failed will be empty and nothing will be skipped. At this point, if all tests from test_b.py pass, we don't lose the fact that test_a.py::test_a failed at some point in the past. If we then execute pytest test_a.py --lf, only test_a.py::test_a will run, which is what I'm trying to accomplish with this PR.
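The workflow described above can be sketched with plain node-id strings standing in for collected items. `select_last_failed` is an illustrative helper, not pytest API; it mirrors the patched selection logic in --lf mode:

```python
def select_last_failed(items, lastfailed):
    # Sketch of the patched pytest_collection_modifyitems in --lf mode.
    # Returns (items to run, items to deselect).
    previously_failed = [i for i in items if i in lastfailed]
    previously_passed = [i for i in items if i not in lastfailed]
    if not previously_failed:
        # No collected item is a known failure: run everything,
        # deselect nothing, and leave the cache untouched.
        return items, []
    return previously_failed, previously_passed


lastfailed = {"test_a.py::test_a": True}
# pytest test_b.py --lf: no known failure collected, nothing deselected.
print(select_last_failed(["test_b.py::test_b"], lastfailed))
# pytest test_a.py --lf: only the recorded failure runs.
print(select_last_failed(["test_a.py::test_a", "test_a.py::test_other"], lastfailed))
```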

Member:
sounds correct

+                if not previously_failed:
                     # running a subset of all tests with recorded failures outside
                     # of the set of tests currently executing
-                    pass
-                elif self.config.getvalue("lf"):
+                    return
+                if self.config.getvalue("lf"):
                     items[:] = previously_failed
                     config.hook.pytest_deselected(items=previously_passed)
                 else:
@@ -161,8 +156,9 @@ def pytest_sessionfinish(self, session):
         config = self.config
         if config.getvalue("cacheshow") or hasattr(config, "slaveinput"):
             return
-        prev_failed = config.cache.get("cache/lastfailed", None) is not None
-        if (session.testscollected and prev_failed) or self.lastfailed:
+
+        saved_lastfailed = config.cache.get("cache/lastfailed", {})
Member Author:
Now that we always have self.lastfailed, write it back if it differs from the saved value. Again, I would appreciate a second set of eyes here.

Member:
Last-failed already supported correct updating if it was enabled; making it transient means the updating code has to change.

Member Author:
Not sure what you mean, could you clarify? Also not sure if it was just a general comment or a request for change.

Member:
General comment: last-failed mode already supports working correctly when the test selection doesn't match the failure selection.

Member Author:
OK thanks

+        if saved_lastfailed != self.lastfailed:
             config.cache.set("cache/lastfailed", self.lastfailed)
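The write-back rule in this hunk can be sketched with a dict-based stand-in for pytest's config.cache. `FakeCache` and `save_lastfailed` are illustrative names under stated assumptions, not pytest API:

```python
class FakeCache(dict):
    """Dict-backed stand-in for pytest's config.cache, counting writes."""
    writes = 0

    def get(self, key, default):
        return dict.get(self, key, default)

    def set(self, key, value):
        self.writes += 1
        self[key] = value


def save_lastfailed(cache, lastfailed):
    # Mirrors the patched pytest_sessionfinish: write only when the
    # in-memory failure set differs from what is already cached, so an
    # unrelated run never erases previously recorded failures.
    if cache.get("cache/lastfailed", {}) != lastfailed:
        cache.set("cache/lastfailed", lastfailed)


cache = FakeCache()
save_lastfailed(cache, {"test_a.py::test_a": True})  # differs -> written
save_lastfailed(cache, {"test_a.py::test_a": True})  # unchanged -> skipped
print(cache.writes)  # 1
```

Skipping the redundant write is what lets a run that touched no known failures leave the cache file untouched.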


2 changes: 2 additions & 0 deletions changelog/2621.feature
@@ -0,0 +1,2 @@
``--last-failed`` now remembers forever when a test has failed and only forgets it if it passes again. This makes it
easy to fix a test suite by selectively running files and fixing tests incrementally.
99 changes: 99 additions & 0 deletions testing/test_cache.py
@@ -437,3 +437,102 @@ def test_lastfailed_creates_cache_when_needed(self, testdir):
        testdir.makepyfile(test_errored='def test_error():\n assert False')
        testdir.runpytest('-q', '--lf')
        assert os.path.exists('.cache')

    def test_xfail_not_considered_failure(self, testdir):
        testdir.makepyfile('''
            import pytest
            @pytest.mark.xfail
            def test():
                assert 0
        ''')
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('*1 xfailed*')
        assert self.get_cached_last_failed(testdir) == []

    def test_xfail_strict_considered_failure(self, testdir):
        testdir.makepyfile('''
            import pytest
            @pytest.mark.xfail(strict=True)
            def test():
                pass
        ''')
        result = testdir.runpytest()
        result.stdout.fnmatch_lines('*1 failed*')
        assert self.get_cached_last_failed(testdir) == ['test_xfail_strict_considered_failure.py::test']

    @pytest.mark.parametrize('mark', ['mark.xfail', 'mark.skip'])
    def test_failed_changed_to_xfail_or_skip(self, testdir, mark):
        testdir.makepyfile('''
            import pytest
            def test():
                assert 0
        ''')
        result = testdir.runpytest()
        assert self.get_cached_last_failed(testdir) == ['test_failed_changed_to_xfail_or_skip.py::test']
        assert result.ret == 1

        testdir.makepyfile('''
            import pytest
            @pytest.{mark}
            def test():
                assert 0
        '''.format(mark=mark))
        result = testdir.runpytest()
        assert result.ret == 0
        assert self.get_cached_last_failed(testdir) == []

    def get_cached_last_failed(self, testdir):
        config = testdir.parseconfigure()
        return sorted(config.cache.get("cache/lastfailed", {}))

    def test_cache_cumulative(self, testdir):
        """
        Test workflow where the user fixes errors gradually file by file using --lf.
        """
        # 1. initial run
        test_bar = testdir.makepyfile(test_bar="""
            def test_bar_1():
                pass
            def test_bar_2():
                assert 0
        """)
        test_foo = testdir.makepyfile(test_foo="""
            def test_foo_3():
                pass
            def test_foo_4():
                assert 0
        """)
        testdir.runpytest()
        assert self.get_cached_last_failed(testdir) == ['test_bar.py::test_bar_2', 'test_foo.py::test_foo_4']

        # 2. fix test_bar_2, run only test_bar.py
        testdir.makepyfile(test_bar="""
            def test_bar_1():
                pass
            def test_bar_2():
                pass
        """)
        result = testdir.runpytest(test_bar)
        result.stdout.fnmatch_lines('*2 passed*')
        # ensure cache does not forget that test_foo_4 failed once before
        assert self.get_cached_last_failed(testdir) == ['test_foo.py::test_foo_4']

        result = testdir.runpytest('--last-failed')
        result.stdout.fnmatch_lines('*1 failed, 3 deselected*')
        assert self.get_cached_last_failed(testdir) == ['test_foo.py::test_foo_4']

        # 3. fix test_foo_4, run only test_foo.py
        test_foo = testdir.makepyfile(test_foo="""
            def test_foo_3():
                pass
            def test_foo_4():
                pass
        """)
        result = testdir.runpytest(test_foo, '--last-failed')
        result.stdout.fnmatch_lines('*1 passed, 1 deselected*')
        assert self.get_cached_last_failed(testdir) == []

        result = testdir.runpytest('--last-failed')
        result.stdout.fnmatch_lines('*4 passed*')
        assert self.get_cached_last_failed(testdir) == []