
[Feature Request] Automatic Retries for Throttling by S3 During File Upload #409

Closed
1 of 2 tasks
ViswanathaReddyGajjala opened this issue Jun 26, 2024 · 7 comments
Labels
enhancement New feature or request

Comments

@ViswanathaReddyGajjala

Describe the solution you'd like

When multiple files are uploaded to create a new bot, the upload often fails with the error message "Request failed with status code 503". I want to implement automatic retries for these failed uploads. Are there any considerations I should take into account before implementing this feature?
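
For reference, a minimal sketch of what such a client-side retry could look like, assuming an axios-style error object that exposes `response.status`; `uploadWithRetry`, the attempt count, and the backoff defaults are illustrative names and values, not part of the existing code:

```typescript
// Hypothetical retry wrapper: retries an upload when S3 responds with a 5xx
// throttling error, backing off exponentially (with jitter) between attempts.
async function uploadWithRetry(
  upload: () => Promise<void>,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await upload();
      return;
    } catch (err: any) {
      const status = err?.response?.status;
      const retryable = status === 503 || status === 500;
      if (!retryable || attempt === maxAttempts) {
        throw err;
      }
      // Exponential backoff with jitter so parallel uploads do not retry in lockstep.
      const delayMs = baseDelayMs * 2 ** (attempt - 1) * (0.5 + Math.random() / 2);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```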

Why the solution is needed

For every throttled request, we currently have to delete each file manually. This process can be quite frustrating for our customers. To improve the customer experience, we could implement automatic retries for these failed uploads.

Additional context

I attempted to upload 200+ files and more than 140 of the uploads were throttled. To avoid throttling, I had to upload the files in batches, which was a frustrating process.

Implementation feasibility

Are you willing to discuss the solution with us, decide on the approach, and assist with the implementation?

  • Yes
  • No
@statefb added the enhancement label on Jun 27, 2024
@statefb
Contributor

statefb commented Jun 27, 2024

Ref: https://repost.aws/knowledge-center/http-5xx-errors-s3

@ViswanathaReddyGajjala
Author

Thank you! I noticed that earlier, which is why I'm considering adding support for retries. If you believe this would be beneficial, I can spend some time on this.

@statefb
Contributor

statefb commented Jun 27, 2024

@ViswanathaReddyGajjala The simplest way would be retrying uploadFile n times with a circuit breaker. It'd be appreciated if you could create a PR.
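
A rough sketch of that suggestion, assuming `uploadFile` returns a promise; `UploadCircuitBreaker`, its threshold, and the error message are made-up illustrations rather than the project's actual implementation:

```typescript
// Illustrative circuit breaker: after `failureThreshold` consecutive failures
// it "opens" and further uploads fail fast instead of hammering S3 while it
// is throttling.
class UploadCircuitBreaker {
  private consecutiveFailures = 0;

  constructor(private readonly failureThreshold = 10) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.consecutiveFailures >= this.failureThreshold) {
      throw new Error('Circuit open: too many consecutive upload failures');
    }
    try {
      const result = await task();
      this.consecutiveFailures = 0; // a success closes the breaker again
      return result;
    } catch (err) {
      this.consecutiveFailures += 1;
      throw err;
    }
  }
}

// Usage sketch (names assumed): retry each file, but fail fast once the breaker opens.
// const breaker = new UploadCircuitBreaker();
// for (const file of files) {
//   await breaker.run(() => uploadWithRetry(() => uploadFile(file)));
// }
```

The breaker just stops issuing new uploads after a run of consecutive failures, so a throttled bucket is not hit with the remaining files while it is already returning 503s.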

@ViswanathaReddyGajjala
Author

I've tested it with 856 files and S3 didn't throttle the requests. Previously, my account had lower limits which led to the issue. If I encounter this issue in the future, I will raise a Pull Request to address it. For now, I am closing this issue.

@ViswanathaReddyGajjala
Author

I have a follow-up question. For testing purposes, I've uploaded more than 800 files. These are stored in the _tmp folder within the main S3 bucket.

Will these files be automatically deleted after a certain period, or is there a specific process for this? Additionally, would adding appropriate tags to the bucket be beneficial in this scenario? It would make it easier to find the correct bucket.

@statefb
Contributor

statefb commented Jul 9, 2024

Will these files be automatically deleted after a certain period, or is there a specific process for this?

The latter. These files are deleted on each event. source

would adding appropriate tags

What do you mean by "tags" exactly? The doc says that we can add tags to an object, but not to the bucket. Another question: if tags are added, how can we access the correct bucket more easily?

@statefb
Contributor

statefb commented Jul 9, 2024

Can I reopen this issue? I'm curious why you closed it.
