[FBcode->GH] Allow all torchvision test rules to run with RE #4073

Merged: 2 commits, Jun 16, 2021
Changes from all commits
27 changes: 25 additions & 2 deletions test/conftest.py
@@ -14,12 +14,19 @@ def pytest_configure(config):


def pytest_collection_modifyitems(items):
# This hook is called by pytest after it has collected the tests (google its name to check out its doc!)
# We can ignore some tests as we see fit here, or add marks, such as a skip mark.
#
# Typically here, we try to optimize CI time. In particular, the GPU CI instances don't need to run the
# tests that don't need CUDA, because those tests are extensively tested in the CPU CI instances already.
# This is true for both CircleCI and the fbcode internal CI.
# In the fbcode CI, we have an additional constraint: we try to avoid skipping tests. So instead of relying on
# pytest.mark.skip, in fbcode we literally just remove those tests from the `items` list, and it's as if
# these tests never existed.

out_items = []
for item in items:
# The needs_cuda mark will exist if the test was explicitly decorated with
# the @needs_cuda decorator. It will also exist if it was parametrized with a
# parameter that has the mark: for example if a test is parametrized with
# @pytest.mark.parametrize('device', cpu_and_gpu())
@@ -57,3 +64,19 @@ def pytest_collection_modifyitems(items):
out_items.append(item)

items[:] = out_items
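The filtering strategy described in the comments above can be sketched outside of pytest. Note this is only an illustration: `FakeItem`, `run_on_gpu`, and `in_fbcode` are hypothetical stand-ins, and the middle of the hunk (collapsed in this view) holds the PR's actual logic, which may differ in detail.

```python
# Standalone sketch of the item-filtering idea in pytest_collection_modifyitems.
# FakeItem is a hypothetical stand-in for pytest's collected test items; the
# real hook receives pytest Item objects, which expose get_closest_marker().

class FakeItem:
    def __init__(self, name, marks=()):
        self.name = name
        self.marks = set(marks)

    def get_closest_marker(self, name):
        # pytest's Item.get_closest_marker returns the Mark (truthy) or None.
        return name if name in self.marks else None


def filter_items(items, run_on_gpu, in_fbcode):
    out_items = []
    for item in items:
        needs_cuda = item.get_closest_marker("needs_cuda") is not None
        if needs_cuda and not run_on_gpu and in_fbcode:
            # fbcode CPU machines: drop CUDA tests entirely instead of
            # skipping them, so it's as if they never existed.
            continue
        if not needs_cuda and run_on_gpu and in_fbcode:
            # fbcode GPU machines: drop CPU-only tests, since the CPU CI
            # instances already run them extensively.
            continue
        out_items.append(item)
    # The real hook mutates the collected list in place: items[:] = out_items
    return out_items


items = [FakeItem("test_cpu_only"), FakeItem("test_gpu", marks={"needs_cuda"})]
survivors = filter_items(items, run_on_gpu=True, in_fbcode=True)
print([i.name for i in survivors])  # only the CUDA test survives on a GPU box
```

The key design point the PR comments call out: on CircleCI a skipped test still appears in the report, whereas removing the item from `items` makes it invisible, which is what the fbcode runner expects.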


def pytest_sessionfinish(session, exitstatus):
# This hook is called after all tests have run, just before pytest returns its exit status.
# Here we change exit code 5 into 0.
#
# 5 is issued when no tests were actually run, e.g. if you use `pytest -k some_regex_that_is_never_matched`.
#
# Having no test being run for a given test rule is a common scenario in fbcode, and typically happens on
# the GPU test machines which don't run the CPU-only tests (see pytest_collection_modifyitems above). For
# example `test_transforms.py` doesn't contain any CUDA test at the time of
# writing, so on a GPU test machine, testpilot would invoke pytest on this file and no test would be run.
# This would result in pytest returning 5, causing testpilot to raise an error.
# To avoid this, we transform this 5 into a 0 to make testpilot happy.
if exitstatus == 5:
session.exitstatus = 0
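The hook added above can be exercised without running pytest at all. In this sketch, `types.SimpleNamespace` is a hypothetical stand-in for pytest's real `Session` object; the constant name is ours, though recent pytest versions do expose code 5 as `ExitCode.NO_TESTS_COLLECTED`.

```python
from types import SimpleNamespace

# pytest exits with 5 when no tests were collected or run at all.
EXIT_NO_TESTS_COLLECTED = 5


def pytest_sessionfinish(session, exitstatus):
    # Same shape as the hook in the diff: rewrite "no tests collected" into
    # success, so the fbcode test runner doesn't flag an empty selection.
    if exitstatus == EXIT_NO_TESTS_COLLECTED:
        session.exitstatus = 0


# Simulate a session where nothing was collected (e.g. a GPU machine invoking
# pytest on a file that contains no CUDA tests).
session = SimpleNamespace(exitstatus=EXIT_NO_TESTS_COLLECTED)
pytest_sessionfinish(session, exitstatus=session.exitstatus)
print(session.exitstatus)  # 0
```

Any other exit status (pass, failures, internal error) passes through untouched, so only the "nothing ran" case is papered over.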