fail_under setting with precision is not working #403

Open
jonahkagan opened this issue Apr 24, 2020 · 7 comments
Comments

@jonahkagan

Summary

I have precision set to 2 and fail_under set to 97.47 in the [report] section, and my total test coverage is reported as 97.47%, but I'm getting a failure message and a failing exit code (exit code 2).

Expected vs actual result

Expected: test coverage passes
Actual: FAIL Required test coverage of 97.47% not reached. Total coverage: 97.47%

I even tried modifying fail_under to 97.469, in which case I got this even more nonsensical message:

FAIL Required test coverage of 97.469% not reached. Total coverage: 97.47%
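This pattern is consistent with the total being rounded for display but compared unrounded. A minimal Python sketch of that behavior (the raw total 97.4699 is a hypothetical value, not taken from the report above):

```python
# Hypothetical raw coverage total just below the threshold.
total = 97.4699
fail_under = 97.47

# The report rounds the total to the configured precision for display...
displayed = f"{total:.2f}"
print(displayed)            # 97.47

# ...but the pass/fail check compares the unrounded value.
print(total >= fail_under)  # False -> "FAIL Required test coverage ..."
```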

Reproducer

Versions

Output of relevant packages: pip list, python --version, pytest --version, etc.

Make sure you include complete output of tox if you use it (it will show versions of various things).

Python 3.7.5
pipenv 2018.11.26
pytest 5.4.1
pytest-cov 2.8.1

Config

Include your tox.ini, pytest.ini, .coveragerc, setup.cfg or any relevant configuration.

# .coveragerc
[report]
fail_under = 97.47
precision = 2
skip_covered = true
show_missing = true

Code

Link to your repository, gist, pastebin or just paste raw code that illustrates the issue.

votingworks/arlo@89c50e4

@nedbat
Collaborator

nedbat commented Apr 24, 2020

Thanks. Can you provide details about how you configure your virtualenv and run your tests?

@jonahkagan
Author

Thanks for the quick response!

Sure, I think most of what you're looking for is here: https://github.com/votingworks/arlo/blob/89c50e43216963f06af6e4c5104b67fd33e4ff36/Makefile.

Here are the relevant bits for running tests/coverage:

PIPENV=python3.7 -m pipenv

test-server:
	FLASK_ENV=test ${PIPENV} run python -m pytest ${FILE} \
		-k '${TEST}' --ignore=arlo-client -vv ${FLAGS}

test-server-coverage:
	FLAGS='--cov=. ${FLAGS}' make test-server

I don't know exactly what to tell you about the virtualenv. I didn't set up the repo and, to be honest, don't quite understand how it all works.

@jonahkagan
Author

Unrelated to this bug, but as I've been working with test coverage more, I've realized it would be more useful to be able to set a threshold on the actual number of missed lines instead of a percentage.

I am introducing test coverage to a repo that didn't have it before, so I'm trying to lock in the coverage at its current state so I don't regress (until I have time to invest in covering all the remaining bits). The problem with using a percentage is that whenever I write new code, the percentage changes. Even if all the new code is covered, the percentage increases, so I'd have to update the fail_under threshold with each PR.

If I could lock in the actual number of uncovered lines, then it would be a much more useful baseline to compare to when I add new code.

Wondering if you have thoughts on this. If useful, I could open up a new issue to discuss.
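For illustration, one way such a baseline could be checked outside of fail_under: a sketch assuming a report produced by `coverage json` (coverage.py 5+), whose "totals" section includes a missed-line count. The helper name and report path here are made up, not part of any existing tool:

```python
import json

def check_missed_lines(report_path, max_missed):
    """Return True if the missed-line count in a `coverage json` report
    is within an absolute budget (a baseline, not a percentage)."""
    with open(report_path) as f:
        totals = json.load(f)["totals"]
    # The JSON report's totals include a "missing_lines" count.
    return totals["missing_lines"] <= max_missed
```

New covered code would then leave the baseline untouched, so the threshold only needs updating when missed lines are actually fixed.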

@benji-york

I can reproduce this issue:
[Screenshot: 2020-06-02 07 33 04, showing the same FAIL message]

@dogmatic69

I also experience this issue:
requirements.txt

	docker run --rm \
		-v ${TEST_PATH}:/tests \
		-v ${SRC_PATH}:/src \
		--entrypoint pytest \
		-e PYTHONPATH=/src:/tests \
		${DOCKER_IMAGE}  -W ignore::DeprecationWarning -v -x \
			--cov-report html:/tests/coverage \
			--cov=/src \
			--cov-branch \
			--cov-fail-under=35.7 \
			/tests

----------- coverage: platform linux, python 3.8.2-final-0 -----------
Coverage HTML written to dir /tests/coverage

FAIL Required test coverage of 35.7% not reached. Total coverage: 35.70%

============================== 63 passed in 1.37s ==============================

Playing with the numbers:

FAIL Required test coverage of 35.699% not reached. Total coverage: 35.70%
Required test coverage of 35.69% reached. Total coverage: 35.70%

@nedbat
Collaborator

nedbat commented Dec 13, 2020

If you are seeing this issue, can you increase the reporting precision to see what the actual coverage value is? For example, if the total coverage is 93.18757, it will be reported to two decimal places as 93.19, but the actual value is still below 93.19, so a fail_under of 93.19 will not be met.
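The effect can be sketched with the example value from the comment:

```python
total = 93.18757  # example raw total from the comment above

# At precision = 2 the report displays a rounded figure...
print(f"{total:.2f}")  # 93.19

# ...while the fail_under comparison uses the unrounded total,
# so fail_under = 93.19 still fails:
print(total >= 93.19)  # False

# Raising precision (e.g. precision = 5) reveals the true total:
print(f"{total:.5f}")  # 93.18757
```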

@didibz-harmonya

This PR fixes this reporting issue.
