
skip callbacks when a hoster is unreachable & cleanup/refactoring #22

Open

geoheelias wants to merge 4 commits into main
Conversation

geoheelias (Contributor)
No description provided.

geoheelias requested a review from puhoy on August 13, 2021
geoheelias self-assigned this on August 13, 2021
from urllib3.exceptions import MaxRetryError
from requests.exceptions import ConnectionError, Timeout, TooManyRedirects

try:
    repos = run_block(block_data)
except (MaxRetryError, ConnectionError, Timeout, TooManyRedirects):
    # hosting service unreachable: drop the block, issue no callback
    logger.exception("hosting service not reachable - no indexer callback issued")
    return
_hoster_session_request(
    "PUT", session, url=block_data[BLOCK_KEY_CALLBACK_URL], json=repos
)
geoheelias (Contributor, Author)
This is the main part that fixes the potential issue - we simply don't issue the callback request if we encounter these specific exceptions.

All other exceptions are handled as before, meaning they only cause the chunk they occur in to be empty. These connection-level exceptions instead discard any crawled content (if any), drop the block without a callback, and ask for the next block (if we're in the automated workflow, like in docker-compose); see the sketch below.
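A minimal sketch of what that automated loop then looks like; _get_block and _run_block_and_callback are hypothetical names, assumed here for illustration (the actual functions in the crawler may be named differently):

import requests

session = requests.Session()

while True:
    block_data = _get_block(session)  # hypothetical: request the next block from the indexer
    # returns early without issuing a callback when the hoster is
    # unreachable, so the loop just continues with the next block
    _run_block_and_callback(session, block_data)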

puhoy (Member)

I'm not sure anymore if we can deal with it like that. Since we only clean up dead blocks on add_blocks (when the crawler sends the "answer"), we would only ever hand out more blocks, but never clean up these dead ones.

Also, right now the indexer schedules blocks "hoster with the oldest run timestamp first", meaning crawling would get stuck on a hoster the indexer never gets answers for. (And even if we change the way we schedule, this hoster would keep piling up dead blocks until Redis uses up all the RAM and crashes.)

Maybe we should think about a proper communication protocol first: something that wraps the repo list we return in some kind of state, so that the indexer has a way to, for example, pause a run on a hoster for a day on connection errors, or end the run completely without making an export - something like that?
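For illustration, such an envelope might look something like the sketch below; all names and states here are hypothetical, not part of the current codebase:

from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class RunState(str, Enum):
    # hypothetical states the crawler could report back to the indexer
    OK = "ok"                # chunks crawled normally
    PAUSE_RUN = "pause_run"  # hoster unreachable - indexer should retry this run later
    END_RUN = "end_run"      # indexer should end the run without making an export


@dataclass
class CrawlerAnswer:
    state: RunState
    repos: List[Dict] = field(default_factory=list)  # empty when paused/ended


# on connection errors the crawler would still send an answer (instead of
# staying silent), so the indexer can clean up the block and reschedule:
answer = CrawlerAnswer(state=RunState.PAUSE_RUN)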

    except Exception as e:
        logger.exception("(skipping block chunk) gitea crawler crashed")
-       return False, [], state  # nr.2 - we skip rest of this block, hope we get it next time
+       return False, [], state, e  # nr.2 - we skip rest of this block, hope we get it next time
geoheelias (Contributor, Author)

The first except clause, with the specific exceptions, covers the ones we want to re-raise, since they mean a complete failure for the block - we could try to salvage these requests further, but I think we should start like this.

On the other side, all other exceptions are, like I said, handled as before: we catch them and continue (but now we also return any caught exception that occurred). A sketch of the resulting pattern is below.
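A minimal sketch of that two-tier handling, with identifiers approximated from the diff; fetch_chunk is a hypothetical placeholder for the actual crawl call:

import logging

from urllib3.exceptions import MaxRetryError
from requests.exceptions import ConnectionError, Timeout, TooManyRedirects

logger = logging.getLogger(__name__)


def crawl_chunk(state):
    try:
        repos = fetch_chunk(state)  # hypothetical: crawl one chunk of the block
    except (MaxRetryError, ConnectionError, Timeout, TooManyRedirects):
        # nr.1 - complete failure for the block: re-raise so the caller
        # drops the block and skips the indexer callback
        raise
    except Exception as e:
        # nr.2 - any other error only empties this chunk; the caught
        # exception is now returned alongside the usual values
        logger.exception("(skipping block chunk) gitea crawler crashed")
        return False, [], state, e
    return True, repos, state, None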
