Object storage page, refresh formatting & cleanup #1465

Draft · wants to merge 15 commits into `main`

Revert "clean up default formatting"
This reverts commit a2dbe1d.
vmstan committed Jun 17, 2024
commit 2bfee836ac1d10f7d0471f4913fa1791cc7de737
64 changes: 44 additions & 20 deletions content/en/admin/optional/object-storage.md
@@ -111,62 +111,86 @@ You must serve the files with CORS headers, otherwise some functions of Mastodon

#### `S3_OPEN_TIMEOUT`

The number of seconds before the HTTP handler should time out while trying to open a new HTTP session.

Default: `5` (seconds)

#### `S3_READ_TIMEOUT`

The number of seconds before the HTTP handler should time out while waiting for an HTTP response.

Default: `5` (seconds)

#### `S3_FORCE_SINGLE_REQUEST`

Set this to `true` if you run into trouble processing large files.

Default: `false`

#### `S3_ENABLE_CHECKSUM_MODE`

Enables verification of object checksums when Mastodon is retrieving an object from the storage provider. This feature is available in AWS S3 but may not be available in other S3-compatible implementations.

Default: `false`

#### `S3_STORAGE_CLASS`

When using AWS S3, this variable can be set to one of the [storage class](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html) options which influence the storage selected for uploaded objects (and thus their access times and costs). If no storage class is specified then AWS S3 will use the `STANDARD` class, but options include `REDUCED_REDUNDANCY`, `GLACIER`, and others.

Default: `STANDARD`

#### `S3_MULTIPART_THRESHOLD`

Objects of this size and smaller will be uploaded in a single operation, but larger objects will be uploaded using the multipart chunking mechanism, which can improve transfer speeds and reliability.

Default: `15` (megabytes)

#### `S3_PERMISSION`

Defines the S3 object ACL when uploading new files. Use caution when using [S3 Block Public Access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html) and turning on the `BlockPublicAcls` option, as uploading objects with ACL `public-read` will fail (403). In that case, set `S3_PERMISSION` to `private`.

Default: `public-read`

{{< hint style="danger" >}}
Regardless of the ACL configuration, your S3 bucket must be set up to ensure that all objects are publicly readable but not writable or listable. At the same time, Mastodon itself should have write access to the bucket. This configuration is generally consistent across all S3 providers, and common ones are highlighted below.
{{</ hint >}}
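As a sketch of the bucket setup described in the hint above, a minimal AWS bucket policy can grant anonymous read access to individual objects without allowing the bucket contents to be listed. The bucket name `mastodon-media` below is a hypothetical placeholder, and write access is assumed to come from Mastodon's own IAM credentials rather than this policy:

```shell
# Hypothetical bucket name -- substitute your own.
# Grants anonymous s3:GetObject on objects only; s3:ListBucket is not
# granted, so the bucket contents cannot be enumerated.
aws s3api put-bucket-policy --bucket mastodon-media --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForMastodonMedia",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mastodon-media/*"
    }
  ]
}'
```

Note that if `BlockPublicAcls` is enabled on the bucket, this policy-based approach pairs with `S3_PERMISSION=private`, since per-object `public-read` ACLs would be rejected.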

#### `S3_BATCH_DELETE_LIMIT`

The official [Amazon S3 API](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html) can handle deleting 1,000 objects in one batch job, but some providers may have issues handling this many in one request, or offer lower limits.

Default: `1000`

#### `S3_BATCH_DELETE_RETRY`

During batch delete operations, S3 providers may periodically fail or time out while processing deletion requests. Mastodon will back off and retry the request up to this maximum number of times.

Default: `3`
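Taken together, the variables above might appear in a Mastodon `.env.production` file as follows. All values here are illustrative examples, not recommendations; any variable left unset falls back to the default documented in its section:

```shell
# Illustrative values only -- omit a variable to use its default.
S3_OPEN_TIMEOUT=5
S3_READ_TIMEOUT=10
S3_FORCE_SINGLE_REQUEST=true
S3_ENABLE_CHECKSUM_MODE=false
S3_STORAGE_CLASS=STANDARD_IA
S3_PERMISSION=private
S3_BATCH_DELETE_LIMIT=500
S3_BATCH_DELETE_RETRY=3
```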

### MinIO
