WIP Change outcome to 'passed' for xfail unless it's strict #1795

Merged Aug 18, 2016 · 17 commits · showing changes from 1 commit
12 changes: 9 additions & 3 deletions _pytest/skipping.py
@@ -230,7 +230,8 @@ def pytest_runtest_makereport(item, call):
     if hasattr(item, '_unexpectedsuccess') and rep.when == "call":
         # we need to translate into how pytest encodes xpass
         rep.wasxfail = "reason: " + repr(item._unexpectedsuccess)
-        rep.outcome = "failed"
+        # TODO: Do we need to check for strict xfail here as well?

Member:

I think you are right, this has to be:

rep.outcome = "failed" if is_strict_xfail else "passed"

Member:

I suggest creating a helper function:

def is_strict_xfail(evalxfail, item):
    strict_default = item.config.getini('xfail_strict')
    return evalxfail.get('strict', strict_default)

And reusing in both places.

I also noticed you can reuse this same function in check_strict_xfail in the same module.
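
For illustration, that reuse inside check_strict_xfail might look something like this — a sketch only, based on that function's body at the time, with the helper taking the evalxfail object as fixed above (details such as the failure message may differ):

def check_strict_xfail(pyfuncitem):
    """check xfail(strict=True) for the given PASSING test"""
    evalxfail = pyfuncitem._evalxfail
    # the helper replaces the inline getini('xfail_strict') / get('strict') dance
    if evalxfail.istrue() and is_strict_xfail(evalxfail, pyfuncitem):
        del pyfuncitem._evalxfail
        explanation = evalxfail.getexplanation()
        pytest.fail('[XPASS(strict)] ' + explanation, pytrace=False)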

Member:

Oh OK, I looked some more and noticed that in this case it is just there to cover unittest specifically, which doesn't have a strict option:

import unittest

class T(unittest.TestCase):

    @unittest.expectedFailure 
    def test_foo(self):
        assert 0

    @unittest.expectedFailure 
    def test_pass(self):
        assert 1

if __name__ == '__main__':
    unittest.main()        

Running it:

$ python foo.py
xu
----------------------------------------------------------------------
Ran 2 tests in 0.000s

FAILED (expected failures=1, unexpected successes=1)

So I think the original code was the correct one.

Member:

Another thing: I noticed that no unittest-related tests caught this regression; we should add two new test cases for it.

This test suite should pass:

import unittest
class T(unittest.TestCase):

    @unittest.expectedFailure 
    def test_xfail(self):
        assert 0

This one should fail because @expectedFailure works like xfail(strict=True):

import unittest
class T(unittest.TestCase):

    @unittest.expectedFailure 
    def test_unexpected_pass(self):
        pass

Additional idea: we could probably parametrize this and run it using py.test and unittest.main to ensure we have the same outcome in both cases.
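
A rough sketch of that parametrized test (illustrative only: the test name, the exit-code check, and the exact shape are assumptions, not code from this PR; the testdir fixture is available in pytest's own test suite):

import pytest

@pytest.mark.parametrize('runner', ['pytest', 'unittest'])
@pytest.mark.parametrize('body, should_fail', [
    ('assert 0', False),  # expected failure -> whole run passes
    ('pass', True),       # unexpected success -> whole run fails
])
def test_expectedfailure_outcomes(testdir, runner, body, should_fail):
    # The same unittest-based file is executed by both runners.
    path = testdir.makepyfile("""
        import unittest

        class T(unittest.TestCase):
            @unittest.expectedFailure
            def test_it(self):
                {body}

        if __name__ == '__main__':
            unittest.main()
    """.format(body=body))
    result = testdir.runpytest(path) if runner == 'pytest' else testdir.runpython(path)
    # Both runners should agree on the overall outcome.
    assert bool(result.ret) == should_fail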

rep.outcome = "passed"
elif item.config.option.runxfail:
pass # don't interefere
elif call.excinfo and call.excinfo.errisinstance(pytest.xfail.Exception):
@@ -245,7 +246,12 @@ def pytest_runtest_makereport(item, call):
rep.outcome = "skipped"
rep.wasxfail = evalxfail.getexplanation()
elif call.when == "call":
rep.outcome = "failed" # xpass outcome
strict_default = item.config.getini('xfail_strict')

Member:

Hmm I think this is the more correct implementation of "strict xfail" than the one I did... with this, I think we don't even need check_strict_xfail (which is used in pytest_pyfunc_call) in this module anymore at all, because failure/passing will be handled here which seems more appropriate now. 😄

+            is_strict_xfail = evalxfail.get('strict', strict_default)
+            if is_strict_xfail:
+                rep.outcome = "failed"
+            else:
+                rep.outcome = "passed"
             rep.wasxfail = evalxfail.getexplanation()
     elif evalskip is not None and rep.skipped and type(rep.longrepr) is tuple:
         # skipped by mark.skipif; change the location of the failure
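
For context, the user-visible effect of this hunk: an xfail-marked test that unexpectedly passes is reported as XPASS by default, and as a real failure only when strict. A minimal illustration (not part of the diff):

import pytest

@pytest.mark.xfail(reason="known bug")
def test_nonstrict_xpass():
    # passes unexpectedly: reported as XPASS, the run still succeeds
    assert True

@pytest.mark.xfail(strict=True, reason="must fail")
def test_strict_xpass():
    # passes unexpectedly: treated as a real failure because strict=True
    assert True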
@@ -260,7 +266,7 @@ def pytest_report_teststatus(report):
     if hasattr(report, "wasxfail"):
         if report.skipped:
             return "xfailed", "x", "xfail"
-        elif report.failed:
+        elif report.passed:

Member:

@fabioz I think this will mean you no longer have to do special handling for this in pydev_runfiles_pytest2.py, correct?

Member Author:

@nicoddemus do you think we need to check for strict_xfail here too?

Member (nicoddemus), Aug 15, 2016:

No need: strict xfails now have report set to failed and will appear as a normal failure.

Member Author:

okay, that's what I thought. Thanks!

return "xpassed", "X", ("XPASS", {'yellow': True})

# called by the terminalreporter instance/plugin