* s2: Fix DecodeConcurrent deadlock on errors (klauspost#925). When DecodeConcurrent encountered an error, it could lock up in some cases. Fixed, and a fuzz test for stream decoding added. Fixes klauspost#920.
* s2: Document and test how to peek the stream for skippable blocks (klauspost#918). Co-authored-by: Klaus Post <[email protected]>
* gzhttp: Allow overriding decompression on transport (klauspost#892). This allows receiving compressed data even if `Content-Encoding` is set, and allows decompression even if the client did not set `Accept-Encoding`.
* gzhttp: Fix missing content type on Close (klauspost#883). If compression had not yet been triggered in Write, the content type is now detected on Close. Fixes klauspost#882.
* zstd: Fix corrupted output in "best" (klauspost#876). Regression from klauspost#784 and follow-up klauspost#793; fixes klauspost#875. A zero-offset back-reference was possible when "improve" succeeded twice in a row in the "skipBeginning" part, finding only two (previously unmatched) length-4 matches where the start offset decreased by 2 in both cases. The resulting output had an end offset equal to the next 's', producing a self-reference. A general check in "improve" now rejects these, which will also guard against similar issues in the future. This also hints at some potentially suboptimal hash indexing, but that improvement will be taken separately. The fuzz test set has been updated.
* s2: Fix S2 "best" dictionary wrong encoding (klauspost#871). A dictionary match one byte outside the dictionary range was possible. Recovery is possible; send a request in the [discussion](https://github.com/klauspost/compress/discussions/categories/general) if you find content with this error.
* gzip: Copy bufio.Reader on Reset (klauspost#860). The code already checked whether the buffer could be reused, but since it was not copied in the overwrite, a new buffer was allocated each time.
* s2: Add GetBufferCapacity() method (klauspost#832). We reuse readers with sync.Pool and would like to avoid allocating memory for the default block size, since most of our inputs are smaller. To better estimate how big the lazy buffer should be, we plan to keep a running average of the internal buffer capacities; this method allows us to implement that.
* s2: Clean up matchlen assembly (klauspost#825).