
per file coverage threshold #691

Open
graingert opened this issue Aug 8, 2018 · 26 comments
Labels: enhancement (New feature or request), target

Comments

graingert (Contributor) commented Aug 8, 2018

Use cases:

  • fail on unexecuted files
  • when starting to write unit tests in a big project, overall coverage is low, but per-file coverage is easier to raise
nedbat (Owner) commented Aug 8, 2018

Can you say more about what you want this to do? What would the user experience be?

graingert (Contributor, Author) commented

--fail-under-file 0 would return exit code 2 for any unexecuted files

graingert (Contributor, Author) commented

--fail-under-file 10 would return exit code 2 for any files with individual coverage less than 10%

nedbat (Owner) commented Aug 9, 2018

Thanks, that makes it clear.

I'm not sure how this would help for your second case ("starting to do unit tests in a big project"): coverage would fail for a very long time, until you managed to get at least 10% (or whatever) coverage in every single file. That seems like it would be discouraging, and push you toward the wrong metric.

nedbat added the enhancement label on Oct 9, 2018
nedbat added the question label on Oct 21, 2018
cbernecker commented

Is there any ongoing work to implement --fail-under-file 10? I think this would be a big benefit for most projects, because with that feature you can check which developers didn't do their testing homework.

wshaikh commented Oct 15, 2021

Let's say I have 10 files: 9 of them are at 100%, one is at 0%, and I set my limit to 90% coverage. Currently the check passes because it takes the average, but I don't want it to pass: that one file is below my 90% threshold, so it should fail.

When can we expect this feature?

nedbat removed the question label on Oct 15, 2021
nedbat added this to the 6.x milestone on Oct 15, 2021
nedbat (Owner) commented Oct 16, 2021

See also #717, which is similar.

nedbat (Owner) commented Oct 17, 2021

One option while waiting for coverage.py to add this as a feature: implement it as a separate tool. You can get a JSON report from coverage.py and then check the totals for each file. This would also be a way to experiment with different styles of rules ("tests/" must have 100%, "project/" must have 90%, or whatever).
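
A minimal sketch of that approach, assuming the default coverage.json file written by coverage json and an illustrative 90% per-file goal:

    import json
    import sys

    GOAL = 90.0  # illustrative per-file threshold

    # "coverage json" writes its report to coverage.json by default.
    with open("coverage.json") as f:
        report = json.load(f)

    failed = False
    for path, info in sorted(report["files"].items()):
        pct = info["summary"]["percent_covered"]
        if pct < GOAL:
            print(f"{path}: {pct:.1f}% is below the goal of {GOAL:.0f}%")
            failed = True

    # Exit code 2 mirrors the behavior proposed earlier in the thread.
    sys.exit(2 if failed else 0)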

nedbat (Owner) commented Oct 31, 2021

I've written a proof-of-concept using the JSON report: https://github.com/nedbat/coveragepy/blob/master/lab/goals.py

Try it and let me know what you think.

nedbat (Owner) commented Nov 2, 2021

... and a blog post about it: https://nedbatchelder.com/blog/202111/coverage_goals.html

nedbat removed this from the Next milestone on Nov 5, 2021
nedbat added the target label on Dec 12, 2021
RodriguezLucha commented

Looking forward to this feature!

nedbat (Owner) commented Jan 18, 2022

@RodriguezLucha you can get it now: https://nedbatchelder.com/blog/202111/coverage_goals.html. Or is there a reason that isn't sufficient?

jdahlin commented Aug 10, 2022

@nedbat I've ended up reimplementing different ad hoc variants of this feature over the years, and personally I think it would make a lot of sense to include this in coverage.py itself, to reduce the number of dependencies and have a standardized way of doing it. I'm willing to help implement and document this feature if you agree that it should go into coverage.py itself.

Kludex (Contributor) commented Aug 21, 2022

> @RodriguezLucha you can get it now: https://nedbatchelder.com/blog/202111/coverage_goals.html. Or is there a reason that isn't sufficient?

It's easier to convince a team to introduce a new configuration line than a new file. 🤷‍♂️

I've come across this issue 3 times already because I wanted to suggest it on different projects.

But well... the script should be enough. I would not suggest it, though, because the weight of carrying that extra file doesn't outweigh the need for this functionality on a project. If it were in coverage itself, it would be just one line of configuration.

Anyway, I fully understand you. But if the feature were available in coverage, I'd probably use it on every project that doesn't already have 100% coverage.

chriselion commented

This was super helpful for enforcing full coverage on our test files (and uncovered some broken tests in the process).

Maybe as a middle ground, you could add this as a separate console_script in the coverage library, without actually adding it to the coverage command?
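
For illustration only, such an entry point might look like this inside the setup() call in coverage.py's setup.py (the coverage-goals name and the coverage.goals:main target are hypothetical):

    # Hypothetical: expose the goals script as its own console script,
    # alongside the existing coverage entry point.
    entry_points={
        "console_scripts": [
            "coverage = coverage.cmdline:main",      # existing command
            "coverage-goals = coverage.goals:main",  # hypothetical new script
        ],
    },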

One piece of feedback (which I can open a PR for if you want) would be to use logging.error on lines like

print(f"{result}, below {args.goal}")

so that they show up better in some CI systems (Bamboo was initially hiding this in one of the output panes).
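
A minimal sketch of that change, reusing the result and args.goal values from the quoted line in goals.py:

    import logging

    # A level prefix in the message makes failures stand out to CI log parsers.
    logging.basicConfig(format="%(levelname)s: %(message)s")
    log = logging.getLogger("goals")

    # Instead of print(...):
    log.error("%s, below %s", result, args.goal)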

mebibou commented Jul 11, 2023

This feature would be quite useful to have in this tool by default. I very often realize I missed testing a whole file because I imported a function from another file (a bad copy-paste, obviously), and I only notice the mistake when I look at the details of all the files and see a 0%. Since there are many files, the total coverage is still above 90%, but this option would detect the mistake easily.

There are plenty of other tools in other languages that provide this by default, so why not here?

nedbat (Owner) commented Jul 16, 2023

> why not here?

The usual tradeoff of having to support code, and wondering how much use it would get. I suppose it wouldn't be much work to add a new command coverage goal that had a similar command line to the goals.py program from my blog post. I'm just not sure how many people would find that useful.

Kludex (Contributor) commented Jul 16, 2023

Do you have a suggestion on how to estimate that?

I'd use it for uvicorn. 😬👍

nedbat (Owner) commented Jul 16, 2023

> Do you have a suggestion on how to estimate that?

The best we can do is gauge from comments on issues, and guess.

Kludex (Contributor) commented Jul 16, 2023

If you can be more objective about what is needed to make a decision here, I can try to help... 👀

nedbat (Owner) commented Jul 16, 2023

Thanks for the offer, but there is nothing more objective. We don't have a way to poll the users of coverage.py.

martin-thoma commented

I'm not sure if this is the same: How can I enforce 100% line coverage for test files in Python?

florian-guily commented

I'd be interested in this feature as well!

EwertonDCSilv commented

I'm also interested in this topic!

EwertonDCSilv commented

It would be great if the tool supported this, simplifying life, without having to reimplement it ad hoc across multiple projects for multiple teams.

ChillarAnand commented

I work on multiple projects where specific files have a 100% coverage threshold. Unfortunately, copying the goals.py script to all repos is cumbersome.

If there are no plans to add a new command, I would like to create a separate Python package just for this script and slowly add other features if required.
