Object storage page, refresh formatting & cleanup #1465

Draft · wants to merge 15 commits into `main`
cleanup default formatting
vmstan committed Jun 17, 2024
commit a6f01f4449a4c23ed3f653df70297b9c077fe6a7
64 changes: 20 additions & 44 deletions content/en/admin/optional/object-storage.md
@@ -111,87 +111,63 @@ You must serve the files with CORS headers, otherwise some functions of Mastodon

#### `S3_OPEN_TIMEOUT`

The number of seconds before the HTTP handler should time out while trying to open a new HTTP session.

Default: `5` (seconds)

#### `S3_READ_TIMEOUT`

The number of seconds before the HTTP handler should time out while waiting for an HTTP response.

Default: `5` (seconds)

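If a storage backend is slow to respond, both timeouts can be raised in `.env.production`. The values below are illustrative, not recommendations:

```bash
# .env.production — example values only
# Allow more time to establish a connection to a slow or distant S3 endpoint.
S3_OPEN_TIMEOUT=10
# Allow more time to receive a response once the connection is open.
S3_READ_TIMEOUT=15
```
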
#### `S3_FORCE_SINGLE_REQUEST`

Set this to `true` if you run into trouble processing large files.

Default: `false`

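For example, if large media uploads fail against your provider, the flag could be enabled in `.env.production` (a sketch, not a required setting):

```bash
# .env.production — upload each file in a single request instead of multipart chunks.
S3_FORCE_SINGLE_REQUEST=true
```
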
#### `S3_ENABLE_CHECKSUM_MODE`

Enables verification of object checksums when Mastodon is retrieving an object from the storage provider. This feature is available in AWS S3 but may not be available in other S3-compatible implementations.

Default: `false`

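If your provider is AWS S3, or another implementation you have confirmed supports checksum verification, the option can be switched on as in this sketch:

```bash
# .env.production — only enable if the provider supports checksum verification.
S3_ENABLE_CHECKSUM_MODE=true
```
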
#### `S3_STORAGE_CLASS`

When using AWS S3, this variable can be set to one of the [storage class](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html) options which influence the storage selected for uploaded objects (and thus their access times and costs). If no storage class is specified then AWS S3 will use the `STANDARD` class, but options include `REDUCED_REDUNDANCY`, `GLACIER`, and others.

Default: `STANDARD`

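As an illustration, an instance could trade lower storage costs for higher access costs by selecting a different class; `STANDARD_IA` below is only an example, and the right choice depends on how often your media is actually read:

```bash
# .env.production — store uploaded objects in an alternative AWS storage class.
S3_STORAGE_CLASS=STANDARD_IA
```
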
#### `S3_MULTIPART_THRESHOLD`

Objects of this size and smaller will be uploaded in a single operation, but larger objects will be uploaded using the multipart chunking mechanism, which can improve transfer speeds and reliability.

Default: `15` (megabytes)

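A sketch of raising the threshold is below. The unit the environment variable expects is an assumption here (some releases read it as bytes), so verify it against your Mastodon version before copying a value:

```bash
# .env.production — illustrative only; confirm the expected unit for your release.
# Assuming the value is read as bytes, this raises the threshold to roughly 50 MB.
S3_MULTIPART_THRESHOLD=52428800
```
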
#### `S3_PERMISSION`

Defines the S3 object ACL when uploading new files. Use caution when using [S3 Block Public Access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html) and turning on the `BlockPublicAcls` option, as uploading objects with ACL `public-read` will fail (403). In that case, set `S3_PERMISSION` to `private`.

Default: `public-read`

{{< hint style="danger" >}}
Regardless of the ACL configuration, your S3 bucket must be set up to ensure that all objects are publicly readable but not writable or listable. At the same time, Mastodon itself should have write access to the bucket. This configuration is generally consistent across all S3 providers, and common ones are highlighted below.
{{</ hint >}}

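As an example following the guidance above, an instance whose bucket enforces `BlockPublicAcls` (and serves media through a publicly readable proxy or CDN instead) would switch the ACL like this:

```bash
# .env.production — for buckets where BlockPublicAcls rejects public-read uploads.
S3_PERMISSION=private
```
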
#### `S3_BATCH_DELETE_LIMIT`

The official [Amazon S3 API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html) can handle deleting 1,000 objects in one batch job, but some providers may have issues handling this many in one request, or offer lower limits.

Default: `1000`

#### `S3_BATCH_DELETE_RETRY`

During batch delete operations, S3 providers may periodically fail or time out while processing deletion requests. Mastodon will back off and retry the request up to this maximum number of times.

Default: `3`

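As a combined example covering both `S3_BATCH_DELETE_LIMIT` and `S3_BATCH_DELETE_RETRY`, a provider that caps batch deletions below the S3 default could be accommodated like this (the numbers are placeholders, not recommendations):

```bash
# .env.production — tune batch deletions for a provider with stricter limits.
# Delete at most 500 objects per batch request.
S3_BATCH_DELETE_LIMIT=500
# Retry a failed or timed-out batch deletion up to 5 times before giving up.
S3_BATCH_DELETE_RETRY=5
```
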
### MinIO

MinIO is an open-source implementation of an S3 object provider. This section does not cover how to install it, but how to configure a bucket for use in Mastodon.