DOCS-1159: RELEASE.2024-03-15T01-07-19Z #1211
Conversation
LGTM mod typo
Couple of suggestions for you.
Otherwise, looks good to me.
You can change this value after startup to any value between ``0`` and the upper bound for the erasure set size.
MinIO only applies the changed parity to newly written objects.
Existing objects retain the parity value in place at the time of their creation.
MinIO by default automatically "upgrades" parity for an object if the destination erasure set maintains write quorum *but* has one or more drives offline.
Suggested change:
MinIO by default automatically "upgrades" parity for an object if the destination erasure set maintains write quorum *but* has one or more drives offline.
The object written on the destination maintains the same number of data shards, but a reduced number of parity shards compared to the original.
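For context, the knob being documented here is the standard storage-class parity. A minimal sketch of changing it at runtime (assuming a hypothetical alias named `myminio`) might look like:

```sh
# Applies only to objects written after the change; existing objects keep their parity.
mc admin config set myminio/ storage_class standard=EC:3
```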
I think we need an example to clarify how MinIO "upgrades" parity.
I might have gotten it backwards in my suggestion.
I think so as well - @harshavardhana or perhaps @klauspost can confirm but:
In an erasure set of 16 drives and EC:4, if 1 drive goes offline, we write with EC:5.
So the new object is 11:5 while other objects are 12:4. Since a drive is offline, we get 11 data blocks and 4 parity blocks, but the object maintains the 'quorum' of all other objects.
If you were to set this for capacity mode, we would write 12:4 and just leave off the last parity block, such that this object has reduced quorum (just like all the other objects on that set).
AFAIK healing does not correct parity upgrades, though it would 'fix' the object written in capacity mode.
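To make that arithmetic concrete, here is a rough sketch (illustrative only, not MinIO's actual implementation) of the shard split under the default parity-upgrade behavior versus capacity mode, assuming a 16-drive set configured with EC:4:

```go
package main

import "fmt"

// shardLayout returns the data/parity split for a newly written object.
// Per the description above, the default behavior "upgrades" parity by one
// shard for each offline drive, while in capacity mode the configured split
// is kept and the missing shard is simply not written.
func shardLayout(setSize, configuredParity, offlineDrives int, upgradeParity bool) (data, parity int) {
	parity = configuredParity
	if upgradeParity {
		parity += offlineDrives
		if parity > setSize/2 { // can't upgrade past half the set
			parity = setSize / 2
		}
	}
	return setSize - parity, parity
}

func main() {
	// 16-drive erasure set, EC:4, one drive offline.
	d, p := shardLayout(16, 4, 1, true)
	fmt.Printf("default (parity upgrade): %d data + %d parity\n", d, p) // 11 + 5

	d, p = shardLayout(16, 4, 1, false)
	fmt.Printf("capacity mode:            %d data + %d parity\n", d, p) // 12 + 4, one parity shard not written
}
```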
This also implies we upgrade parity by one for each 'down' drive, up to quorum.
I assume the issue is that the smaller number of data shards means each shard is larger, so some drives end up hot-spotted if the erasure set operates in its degraded state for a long time.
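A back-of-envelope illustration of that hot-spotting concern, using a hypothetical object size: as parity is upgraded per offline drive, each remaining data shard carries more bytes.

```go
package main

import "fmt"

func main() {
	const objectSize = 120.0 // MiB, hypothetical object size
	const setSize = 16
	const configuredParity = 4

	// Upgrading parity by one per offline drive shrinks the data shard
	// count, so each remaining data shard gets larger and the surviving
	// drives absorb more bytes per object.
	for offline := 0; offline <= 4; offline++ {
		parity := configuredParity + offline
		data := setSize - parity
		fmt.Printf("%d offline -> %2d data + %d parity, ~%.1f MiB per data shard\n",
			offline, data, parity, objectSize/float64(data))
	}
}
```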
@djwfyi I might fire this off now and return to parity upgrade behavior in a dedicated PR, since it's likely nuanced.
Co-authored-by: Daryl White <[email protected]>
Co-authored-by: Andrea Longo <[email protected]>
Partially addresses #1159
Excluding Metrics v3, since that work is ongoing and not fully baked yet.
All changes are effectively config level.
Also fixed up some format/hierarchy issues while I was at it.
Staged: