Integration test improvements #2858
```diff
@@ -28,7 +28,11 @@ teardown() {
   DID_TEAR_DOWN=1

   if [ "$1" != "EXIT" ]; then
-    echo "An error occurred, caught SIG$1 on line $2"
+    error_msg="An error occurred, caught SIG$1 on line $2"
+    echo "$error_msg"
+    dsn="https://[email protected]/6627632"
+    sentry_cli="docker run --rm -v /tmp:/work -e SENTRY_DSN=$dsn getsentry/sentry-cli"
+    $sentry_cli send-event -m "$error_msg" --logfile "$log_file"
   fi

   echo "Tearing down ..."
```
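The teardown pattern this hunk extends can be sketched as follows. This is a standalone illustration, not the repository's actual script: a `teardown` handler wired up via `trap` receives the signal name and line number, builds the error message, and (in the real script) hands it to `sentry-cli`; here the `send-event` call is left as a comment.

```shell
#!/usr/bin/env bash
set -uo pipefail

DID_TEAR_DOWN=0

teardown() {
  # Run at most once, whether we arrive via a signal or normal exit.
  [ "$DID_TEAR_DOWN" -eq 1 ] && return 0
  DID_TEAR_DOWN=1
  if [ "$1" != "EXIT" ]; then
    error_msg="An error occurred, caught SIG$1 on line $2"
    echo "$error_msg"
    # The diff above reports the event at this point:
    #   $sentry_cli send-event -m "$error_msg" --logfile "$log_file"
  fi
  echo "Tearing down ..."
}

# $LINENO expands when the trap fires, giving the line that failed.
trap 'teardown INT $LINENO' INT
trap 'teardown TERM $LINENO' TERM
trap 'teardown EXIT $LINENO' EXIT

# Simulate a trap firing on line 42 (in a subshell, so the real traps stay armed):
out="$(DID_TEAR_DOWN=0; teardown INT 42)"
echo "$out"
```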
```diff
@@ -41,10 +45,10 @@ echo "${_endgroup}"

 echo "${_group}Starting Sentry for tests ..."
 # Disable beacon for e2e tests
 echo 'SENTRY_BEACON=False' >>$SENTRY_CONFIG_PY
-echo y | $dcr web createuser --force-update --superuser --email $TEST_USER --password $TEST_PASS
 $dc up -d
+printf "Waiting for Sentry to be up"
 timeout 90 bash -c 'until $(curl -Isf -o /dev/null $SENTRY_TEST_HOST); do printf '.'; sleep 0.5; done'
+echo y | $dc exec web sentry createuser --force-update --superuser --email $TEST_USER --password $TEST_PASS
-printf "Waiting for Sentry to be up"
 echo ""
 echo "${_endgroup}"
```

Reviewer (on the `createuser` change): `docker run` is slower than just an exec into a running container.

Author: It is, but it is also intentional to keep the run separate from the main web process. That said, for a one-off thing like this, I think using …

Reviewer: What happens if we exec into the web container but it isn't ready yet? Does Docker wait until it is ready, or does this fail? If it's the former, we should …

Author: Docker Compose up will only succeed if the container healthcheck for web passes, so I don't think this will be a problem. That step happens on a previous line, so by the time the tests reach the createuser logic the web container will always be ready.
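The `timeout 90 bash -c 'until curl …'` line above is an instance of a generic poll-with-deadline pattern. A minimal sketch, using a hypothetical `wait_for` helper (an assumption for illustration, not the script's inline one-liner):

```shell
#!/usr/bin/env bash
# Hypothetical helper, not part of the repository: poll a probe command
# until it succeeds or the deadline passes.
wait_for() {
  local timeout_s="$1"; shift
  local deadline=$(( SECONDS + timeout_s ))
  until "$@"; do
    if (( SECONDS >= deadline )); then
      return 1  # gave up, like `timeout` exiting non-zero
    fi
    printf '.'
    sleep 0.5
  done
}

# Equivalent in spirit to the line in the diff:
#   wait_for 90 curl -Isf -o /dev/null "$SENTRY_TEST_HOST"
```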
Reviewer: Just so I understand: what was the behavior before you added these settings? Unlimited retries? No timeout, so it hung until the action crashed?

Author: The tests would fail from flakes way too often; adding this in drastically increases the chance that the tests pass when they should.

Reviewer: So, in effect, the previous (implicit) setting was `max_attempts: 1`? Or `timeout_minutes: Infinity`? Or some combination of the two? I get that the problem we are trying to solve is flakiness; I'm just not clear how changing (raising? lowering?) `max_attempts` and `timeout_minutes` helps, since it's not obvious what the current state of affairs is.

Author: Yep, that's correct. The `max_attempts` really is just the first step toward adding flaky-test detection: if a job fails but then is retried and succeeds, it can be marked as flaky. The `timeout_minutes` is a required parameter here. I can remove it and re-add it in a follow-up PR if that is clearer.

Reviewer: No, that's fine. I just wanted to understand the change. LGTM!
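The `max_attempts` semantics discussed above reduce to a bounded retry loop. A sketch of that behavior, assuming a hypothetical `retry` shell helper (the real workflow presumably expresses this through a retry action's parameters, not shell):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the retry semantics discussed, not the actual
# CI configuration: max_attempts bounds how many times the command runs;
# a pass on a later attempt is the "flaky" signal mentioned in the thread.
retry() {
  local max_attempts="$1"; shift
  local attempt
  for (( attempt = 1; attempt <= max_attempts; attempt++ )); do
    if "$@"; then
      if (( attempt > 1 )); then
        echo "flaky: passed on attempt $attempt" >&2
      fi
      return 0
    fi
  done
  return 1  # exhausted attempts; the job fails for real
}

# Each attempt could additionally be capped, mirroring timeout_minutes, e.g.:
#   retry 3 timeout 300 ./integration-test.sh   # hypothetical command
```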