forked from consuldemocracy/consuldemocracy
Flaky spec: Commenting proposals Voting comments Update #1605
Comments
I'll have a look at this one. It seems to be similar to the tests fixed by consul#2734, so it should be done quickly.
A similar one failed on forked repo build 12, job 9, and it's also solved by the same pull request.
Closed via #1606.
What
Tests that fail randomly are called "flakies", and this one seems to be one:
Randomized seed: 2812
Travis failed builds:
Failure:
How
Tips for flaky hunting
Random values issues
If the problem comes from randomly generated values, running a single spec multiple times at your command line could help you reproduce the failure:
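One way to do that is a small shell loop that re-runs the spec and stops at the first failure. This is only a sketch: the helper name and the example spec path are illustrative, not part of the project.

```shell
# repeat_cmd N CMD... : run CMD up to N times, stopping at the first failure.
repeat_cmd() {
  n=$1; shift
  for i in $(seq 1 "$n"); do
    echo "Run $i of $n"
    "$@" || return 1
  done
}

# Example usage (the spec path is illustrative):
#   repeat_cmd 10 bin/rspec spec/features/comments/proposals_spec.rb
```

Stopping at the first failure keeps the output short, so the failing run is easy to find.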
You can also try running a single spec in Travis:
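For instance, tagging the flaky scenario might look like this fragment (the scenario name is illustrative, and RSpec must be configured to filter on the tag, e.g. with config.filter_run_when_matching :focus):

```ruby
# Fragment: tag only the flaky scenario with :focus so a filtered run
# (such as a temporary Travis build) executes just this example.
scenario "Update comment", :focus do
  # ... original steps of the flaky spec ...
end
```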
Add the :focus option to the spec and push your branch to GitHub. But remember to remove that :focus
change afterwards when submitting your PR!
Test order issues
Running specs in the order they failed may reveal that a previous test sets a state in the test environment that makes our flaky spec fail or pass. Tests should be independent of each other.
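As a plain-Ruby sketch of such a hidden dependency (no RSpec involved, and all names here are invented for illustration):

```ruby
# Two "tests" sharing mutable state: the second only passes when the
# first has already run, so their outcome depends on execution order.
$cache = []

def test_adds_item
  $cache << :item
  raise "expected one item" unless $cache.size == 1
end

def test_cache_not_empty
  # Hidden order dependency: passes only if test_adds_item ran first.
  raise "cache unexpectedly empty" if $cache.empty?
end
```

Run them in the order written and both pass; run test_cache_not_empty first and it fails. That is exactly the kind of behaviour that re-running rspec with a fixed seed lets you reproduce.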
After executing rspec you can see the seed used; add it as an option to rspec, for example:
bin/rspec --seed 55638
(check the Randomized seed value at the beginning of this issue)
Other things to watch for
Final thoughts
The true objective of this issue is not "to fix" a flaky spec, but to change a spec that randomly fails into one that we can trust when it passes (or fails).
That means you have to think "What are we testing here?", "Can we test the same thing in another way?" or "Are we already testing this somewhere else, at least partially?".
Sometimes the fix is to rewrite the entire test with a different approach, or to extend an existing spec. Tests are sometimes written without looking at other test files or nearby scenarios.