WIP: Storing records on remote s3 storage #6586
base: dev
Conversation
…pload in recording clips and cleanup. Add a storage type field to recordings
…tarts with 'http://' in StorageS3 constructor
✅ Deploy Preview for frigate-docs ready!
The VOD module needs to read the metadata of every segment file for recordings playback. It has a cache, but I imagine this would be incredibly slow if every recording file has to be fetched. How long does it take to load an hour-long m3u8 playlist when you have 24/7 recordings in all mode?
The nginx vod module has "remote" and "mapped" modes for working with non-local files.
```python
def __init__(self, config: FrigateConfig) -> None:
    self.config = config
    if self.config.storage.s3.enabled or self.config.storage.s3.archive:
        if self.config.storage.s3.endpoint_url.startswith("http://"):
```
Shouldn't we use ssl by default here?
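One way to make TLS the default would be to only accept plain HTTP when the user asks for it explicitly. A minimal sketch, assuming a hypothetical helper (this name and behavior are illustrative, not code from this PR):

```python
# Hypothetical helper: default the endpoint to https:// when no scheme is
# given, so TLS is opt-out rather than opt-in.
def normalize_endpoint(endpoint_url: str) -> str:
    if endpoint_url.startswith(("http://", "https://")):
        return endpoint_url  # respect an explicit scheme choice
    return "https://" + endpoint_url
```

With this, a bare `minio.local:9000` becomes `https://minio.local:9000`, while an explicit `http://` endpoint is left alone.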
```python
try:
    total_size = 0
    total_files = 0
    for obj in self.s3_client.list_objects(Bucket=self.s3_bucket).get(
```
You should use `list_objects_v2` with pagination. Otherwise, on buckets with more than 1000 objects, you'll only get the first 1000 results. Something along these lines:
```python
import boto3

def calculate_s3_bucket_size_and_file_count(bucket_name):
    s3 = boto3.client('s3')
    total_size = 0
    total_files = 0
    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket_name):
        for obj in page.get('Contents', []):  # 'Contents' is absent on empty pages
            total_size += obj['Size']
            total_files += 1
    return total_size, total_files

bucket_name = 'my-bucket'  # Replace with your bucket name
total_size, total_files = calculate_s3_bucket_size_and_file_count(bucket_name)
print(f'Total size: {total_size / 1024**3} GB')  # Convert bytes to GB
print(f'Total number of files: {total_files}')
```
```python
self.s3_client = session.create_client(
    "s3",
    aws_access_key_id=self.config.storage.s3.access_key_id,
    aws_secret_access_key=self.config.storage.s3.secret_access_key,
```
You are forcing the use of keys. If `config.storage.s3.access_key_id` is NOT set, you should follow the default credential chain, which lets users use environment variables, `~/.aws/credentials`, etc.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
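A sketch of what honoring the default chain could look like, assuming hypothetical config field names (not Frigate's actual schema): pass explicit credentials only when both keys are configured, and omit them otherwise so boto3 resolves credentials via environment variables, `~/.aws/credentials`, or IAM roles.

```python
# Hypothetical kwargs builder: leave credentials out of the kwargs entirely
# when they are not configured, so botocore falls back to its default chain.
def build_s3_client_kwargs(s3_cfg: dict) -> dict:
    kwargs = {}
    if s3_cfg.get("access_key_id") and s3_cfg.get("secret_access_key"):
        kwargs["aws_access_key_id"] = s3_cfg["access_key_id"]
        kwargs["aws_secret_access_key"] = s3_cfg["secret_access_key"]
    if s3_cfg.get("endpoint_url"):
        kwargs["endpoint_url"] = s3_cfg["endpoint_url"]
    return kwargs  # then: session.create_client("s3", **kwargs)
```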
@skrashevich this is a really awesome feature. I left a few comments for improvements.

I would disable s3 stats by default and give users the option to enable it. On buckets with millions of objects this can become a slow operation due to its recursive nature. Also, I think you are gathering stats for the whole bucket. What if a user uses "/frigate" as their base path? Wouldn't we want stats only for that path?

@blakeblackshear, I'm not an expert on VOD. That said, s3 is commonly used to store the main playlist and HLS files for VOD platforms. Hence, with the correct config I wouldn't expect any performance issues.
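Scoping the stats to the base path is just a matter of passing `Prefix` to the paginator. A sketch that separates the summing logic so it can run over any iterable of `list_objects_v2` pages (the `frigate/` prefix is only an example, not the PR's actual config value):

```python
# Sum object sizes/counts over list_objects_v2 result pages. The pages
# would come from e.g.:
#   s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix="frigate/")
# so only objects under the configured base path are counted.
def bucket_stats(pages):
    total_size = 0
    total_files = 0
    for page in pages:
        for obj in page.get("Contents", []):  # absent when a page is empty
            total_size += obj["Size"]
            total_files += 1
    return total_size, total_files
```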
I suggest adding a button next to a clip, "Upload to S3", or an option to "Upload only favorite clips". I personally do not have the bandwidth to upload all the video that is captured, but I would love to be able to back up specific clips.
any update? |
There are a couple of solutions you can implement yourself.
Work in progress.