Optimization of Sync Records: Implementing Pagination and Temporary Table #6585
Conversation
…ings in chunks for deletion of recordings with missing files
What is the performance of os.walk when there are tens of thousands of files in the directory? I think this was one of the problems we ran into.
Creating a set of the files on disk gives us a constant-time (O(1)) search operation.
% python3 test.py 5 10000
% python3 test.py 5 100000
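The set-based approach mentioned above can be sketched as follows: walk the directory tree once, collect every path into a set, then check database paths against it in O(1) average time instead of rescanning the filesystem per record. The directory and paths below are illustrative, not Frigate's actual layout.

```python
import os

def build_file_set(recordings_dir):
    """Walk the recordings directory once and collect every file path
    into a set, so later existence checks are O(1) on average."""
    files_on_disk = set()
    for root, _dirs, files in os.walk(recordings_dir):
        for name in files:
            files_on_disk.add(os.path.join(root, name))
    return files_on_disk

# Hypothetical usage: find which database paths are missing on disk.
db_paths = ["/media/frigate/recordings/a.mp4", "/media/frigate/recordings/b.mp4"]
files_on_disk = build_file_set("/media/frigate/recordings")
missing = [p for p in db_paths if p not in files_on_disk]
```

Walking the tree once costs O(n) in the number of files, but each subsequent membership check is a hash lookup, which is what makes the chunked comparison against the database cheap.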
We should think carefully about when this runs. If a user temporarily mapped their recordings incorrectly or their NAS disconnected, this would drop all their recordings on restart.
Makes sense. I'll think about marking events in the database as unreachable instead of deleting them.
I think it would be best to use a simple heuristic instead. What if we try to detect when >50% of the recordings would be deleted, and stop the automatic sync in that case? We could also add a button on the Storage page that triggers a sync in "force" mode, which can override the fail-safe. We could also consider creating a hidden file in the recordings directory that we check for on startup. If it's not there and we have recordings in the database, then we can assume there is an issue with the mounted storage.
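The fail-safe suggested above could look something like the sketch below. The 50% threshold, the function name, and the `force` flag are assumptions taken from the comment, not Frigate's actual implementation.

```python
def should_auto_sync(total_in_db, missing_count, force=False):
    """Fail-safe heuristic (sketch): skip automatic deletion when more
    than half of the recordings in the database appear to be missing on
    disk, unless the user explicitly forces a sync (e.g. via a "force"
    button on the Storage page)."""
    if force:
        # User explicitly overrode the fail-safe.
        return True
    if total_in_db == 0:
        # Nothing to protect; syncing is safe.
        return True
    # Refuse to sync automatically if >50% of recordings would be deleted,
    # which likely indicates a disconnected NAS or a bad mount.
    return missing_count / total_in_db <= 0.5
```

A disconnected NAS would make nearly 100% of recordings look missing, so this threshold distinguishes routine cleanup from a storage failure.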
Co-authored-by: Nicolas Mowen <[email protected]>
Looks like there are a few conflicts here, but this is good to merge when those are resolved.
This pull request introduces significant changes to optimize the process of syncing records, speeding it up by as much as 400 times. A temporary table is created for record deletion, and pagination is used to process recordings in chunks. This strategy is particularly beneficial for efficiently handling the deletion of recordings whose files are missing.
Speed Comparison:
Current:
This PR:
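The temporary-table-plus-pagination strategy described above can be sketched with plain sqlite3 as follows. The table and column names, chunk size, and helper name are illustrative only and do not reflect Frigate's actual schema or code.

```python
import sqlite3

CHUNK = 1000  # hypothetical page size for each batch of ids

def delete_missing_recordings(conn, missing_ids):
    """Sketch: stage the ids of recordings with missing files in a temp
    table in paginated chunks, then delete them with one set-based
    statement instead of issuing one DELETE per row."""
    cur = conn.cursor()
    cur.execute(
        "CREATE TEMP TABLE IF NOT EXISTS deleted_recordings (id TEXT PRIMARY KEY)"
    )
    # Insert in chunks so each statement's parameter list stays bounded.
    for i in range(0, len(missing_ids), CHUNK):
        chunk = missing_ids[i:i + CHUNK]
        cur.executemany(
            "INSERT OR IGNORE INTO deleted_recordings (id) VALUES (?)",
            [(x,) for x in chunk],
        )
    # Single set-based delete: the database resolves the join internally.
    cur.execute(
        "DELETE FROM recordings WHERE id IN (SELECT id FROM deleted_recordings)"
    )
    cur.execute("DROP TABLE deleted_recordings")
    conn.commit()
```

Compared with deleting rows one at a time, this moves the membership test into the database engine and amortizes the per-statement overhead across each chunk, which is the kind of change that can plausibly yield a large speedup on tens of thousands of rows.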