
chore: Only check the nightly tests on dafny-lang #3352

Open · wants to merge 8 commits into master
Conversation

@fabiomadge (Collaborator)
By submitting this pull request, I confirm that my contribution is made under the terms of the MIT license.

@davidcok (Collaborator)

Don't do this -- there is no dependency of the doctests on the deep tests. It is useful to be able to check them even if the deep tests fail, even though they will have to wait until the deep tests pass before being merged.

@davidcok left a comment (Collaborator)


Please revert / don't merge these changes. This adds dependencies to our CI when we should be reducing them.

@fabiomadge (Collaborator, Author)

This condition should now always pass when not running on dafny-lang/dafny. The rationale for not running the tests is to keep nonessential load off CI while it's broken.
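The repository guard described above can be sketched as a job-level `if:` condition in a GitHub Actions workflow. This is a minimal illustration, not the PR's actual diff; the job names and the `make deep-tests` entry point are hypothetical:

```yaml
# Hypothetical workflow fragment: the deep tests only run on the
# upstream repository, so forks are unaffected by this gating.
jobs:
  deep-tests:
    # Skip this job entirely when the workflow runs outside
    # dafny-lang/dafny, e.g. on a contributor's fork.
    if: github.repository == 'dafny-lang/dafny'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run deep tests
        run: make deep-tests   # hypothetical test entry point
```

On a fork, the condition evaluates to false and the job is skipped rather than failed, so it does not block merges there.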

@fabiomadge fabiomadge marked this pull request as draft January 11, 2023 01:05
@davidcok (Collaborator)

Let me check my understanding. When pushes are made to a PR on a fork, do the resulting CI runs happen in the user's own account (and therefore run anyway), even after a recent failure of the nightly tests?

@fabiomadge (Collaborator, Author)

It depends on the target of the PR. If a PR is set to merge into dafny-lang/dafny, it runs on the dafny-lang/dafny infrastructure; but if it will end up in the fork, the fork foots the bill.

@fabiomadge fabiomadge marked this pull request as ready for review January 11, 2023 02:00
@keyboardDrummer (Member)

Is there data indicating we need to reduce load when the nightly is failing? Preventing other tests from running while the nightly is broken is costly, since that blocks people from developing.

@fabiomadge (Collaborator, Author)

What data are you looking for? This change doesn't block development in any additional circumstances, but it does unblock us faster.

@keyboardDrummer (Member) commented Jan 13, 2023

> What data are you looking for? This change doesn't block development in any additional circumstances, but it does unblock us faster.

Doesn't this PR add additional deep-test checks to different workflows, for example here? There might be some confusion because this PR also changes the deep tests so they do not run on forks.
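A minimal sketch of the kind of cross-workflow dependency being debated, assuming both jobs live in one workflow file (the PR's actual wiring, job names, and `make` targets are hypothetical):

```yaml
# Hypothetical fragment: making the doc tests depend on the deep tests.
# With `needs:`, doc-tests is skipped whenever deep-tests fails or is
# skipped -- this is the added dependency davidcok objects to above.
jobs:
  deep-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make deep-tests   # hypothetical entry point
  doc-tests:
    needs: deep-tests          # the new gating edge
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make doc-tests    # hypothetical entry point
```

Without the `needs:` line, the two jobs run independently, which is the pre-PR behavior davidcok wants to keep.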

@fabiomadge (Collaborator, Author)

Yes, when a block is in place it applies to more tests, but this doesn't make the blocking more frequent. I don't think the marginal value of additional tests is constant: if they all run, you're happy, but as soon as a single required test is missing, you can't merge anyway. So there is limited value, while still consuming scarce resources better spent on fixing the broken tests.

@keyboardDrummer (Member) commented Jan 16, 2023

> Yes, when a block is in place it applies to more tests, but this doesn't make the blocking more frequent. I don't think the marginal value of additional tests is constant: if they all run, you're happy, but as soon as a single required test is missing, you can't merge anyway.

Shall we first figure out whether we should block (all) tests when the nightly is failing, which is what this PR relates to? For that, it's useful to know whether there is data suggesting that not blocking causes resource scarcity.

@fabiomadge (Collaborator, Author)

I personally don't think gathering this data is worth the effort, but you can use your time how you see fit.

@keyboardDrummer (Member) commented Jan 17, 2023

> I personally don't think gathering this data is worth the effort, but you can use your time how you see fit.

I was asking whether you have existing data that leads you to say CI resources are too scarce to let us run regular builds while the nightly is broken.

If we don't have that, why don't we try allowing the builds to run while the nightly is broken? If that turns out to cause resource scarcity, we can disallow it again.

@fabiomadge (Collaborator, Author)

I suspect we have different thresholds for accepting something as data. I did experience other builds delaying my CI fixes; otherwise I wouldn't propose this change. I suspect this will only get worse with the new test schedule.

@keyboardDrummer (Member) commented Jan 18, 2023

> I did experience other builds delaying my CI fixes; otherwise I wouldn't propose this change.

Okay, I didn't realise that. I haven't noticed it, but maybe I didn't pay enough attention.

I'll leave it up to you and @davidcok to decide, but note that if we don't block the build, we'll learn whether it causes scarcity; if we do block it, we won't.
