
Tags: ankon/compress


v1.17.7


Verified

This commit was created on GitHub.com and signed with GitHub’s verified signature.
tests: Rename fuzz helpers back. (klauspost#931)

v1.17.6

s2: Fix DecodeConcurrent deadlock on errors (klauspost#925)

When DecodeConcurrent encounters an error, it can lock up in some cases.

Fix and add fuzz test for stream decoding.

Fixes klauspost#920
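
The library's actual fix is internal, but the failure mode is a classic concurrency one: a producer goroutine blocks forever on a channel send once the consumer bails out on an error without draining or signalling cancellation. A minimal stdlib-only sketch of the pattern (hypothetical, not the s2 code):

```go
package main

import (
	"errors"
	"fmt"
)

// decodeBlocks fans decoded blocks out to a consumer. On error the
// consumer must signal cancellation (or keep draining); otherwise the
// producer blocks forever on the unbuffered channel -- the shape of
// deadlock fixed in DecodeConcurrent. Hypothetical sketch, not s2 code.
func decodeBlocks(blocks []string) error {
	out := make(chan string)
	quit := make(chan struct{})
	done := make(chan struct{})

	go func() {
		defer close(done)
		for _, b := range blocks {
			select {
			case out <- b:
			case <-quit: // consumer gave up: exit instead of blocking
				return
			}
		}
		close(out)
	}()

	for b := range out {
		if b == "corrupt" {
			close(quit) // tell the producer to stop
			<-done      // wait for it to exit cleanly
			return errors.New("corrupt block")
		}
	}
	return nil
}

func main() {
	fmt.Println(decodeBlocks([]string{"a", "b", "corrupt", "c"}))
}
```

Without the `select` on `quit`, the early `return` in the consumer would leave the producer parked on `out <- b` indefinitely.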

v1.17.5

s2: Document and test how to peek the stream for skippable blocks (klauspost#918)

Co-authored-by: Klaus Post <[email protected]>

v1.17.4

gzhttp: Allow overriding decompression on transport (klauspost#892)

This allows getting compressed data even if `Content-Encoding` is set.

Also allows decompression even if `Accept-Encoding` was not set by this client.

v1.17.3

gzhttp: Fix missing content type on Close (klauspost#883)

If compression had not yet been triggered in Write, be sure to detect content type on Close.

Fixes klauspost#882
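
The underlying pattern is sniffing-deferred-to-threshold: detection normally waits until enough bytes are buffered, so a short body that never reaches the threshold must be sniffed when the stream closes. A hypothetical stdlib sketch of that shape (not gzhttp's code), using `http.DetectContentType`:

```go
package main

import (
	"fmt"
	"net/http"
)

// sniffWriter defers content-type detection until enough bytes arrive.
// The fix's point: if the stream ends (Close) before the sniff threshold
// is reached, detection must still run on whatever was buffered.
// Hypothetical sketch, not gzhttp's implementation.
type sniffWriter struct {
	buf         []byte
	contentType string
}

func (w *sniffWriter) Write(p []byte) (int, error) {
	if w.contentType == "" {
		w.buf = append(w.buf, p...)
		if len(w.buf) >= 512 { // http.DetectContentType reads at most 512 bytes
			w.contentType = http.DetectContentType(w.buf)
		}
	}
	return len(p), nil
}

func (w *sniffWriter) Close() error {
	if w.contentType == "" { // short body: detect on Close instead of never
		w.contentType = http.DetectContentType(w.buf)
	}
	return nil
}

func main() {
	w := &sniffWriter{}
	w.Write([]byte("<html><body>hi</body></html>")) // under 512 bytes: no detection yet
	w.Close()
	fmt.Println(w.contentType)
}
```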

v1.17.2

zstd: Fix corrupted output in "best" (klauspost#876)

Regression from klauspost#784 and followup klauspost#793

Fixes klauspost#875

A 0-offset backreference was possible when "improve" succeeded twice in a row in the "skipBeginning" part, finding only two previously unmatched length-4 matches, but with the start offset decreasing by 2 in both cases.

This would produce output where the end offset was equal to the next 's', i.e. a self-reference.

Add a general check in "improve" that simply rejects these; it will also guard against similar issues in the future.

This also hints at some potentially suboptimal hash indexing - but I will take that improvement separately.

Fuzz test set updated.
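
The guard amounts to an invariant: never accept a candidate whose back-reference offset is zero, since that match would reference its own output position. A hypothetical sketch of the check (not the zstd encoder's code; the field names are invented):

```go
package main

import "fmt"

// match is a simplified stand-in for an encoder match candidate:
// s is the start position, offset the back-reference distance.
type match struct{ s, offset, length int }

// improve accepts a candidate only if it is longer than the current best
// AND keeps a strictly positive offset -- a zero offset would make the
// output self-referential, the corruption fixed here. Hypothetical sketch.
func improve(best *match, cand match) {
	if cand.offset <= 0 {
		return // reject self-references outright, as the general check does
	}
	if cand.length > best.length {
		*best = cand
	}
}

func main() {
	best := match{s: 100, offset: 8, length: 4}
	improve(&best, match{s: 98, offset: 0, length: 6}) // rejected: offset 0
	improve(&best, match{s: 98, offset: 6, length: 6}) // accepted: longer, valid offset
	fmt.Println(best.offset, best.length)
}
```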

v1.17.1

s2: Fix S2 best dictionary wrong encoding (klauspost#871)

A dictionary match one byte outside the dictionary range was possible.

Recovery is possible. If you find any content affected by this error, post a request in the [discussion](https://github.com/klauspost/compress/discussions/categories/general).

v1.17.0

gzip: Copy bufio.Reader on Reset (klauspost#860)

The code already checks to see if the buffer can be reused, but since
it's not copied in the overwrite, a new buffer is allocated each time.

v1.16.7

s2: add GetBufferCapacity() method (klauspost#832)

Add GetBufferCapacity() method. We are reusing readers with sync.Pool
and we'd like to avoid allocating memory for the default block size
since most of the inputs are smaller. To have a better estimate of how
big the lazy buffer should be, we are thinking about keeping in mind a
running average of the internal buffer capacities. This method would
allow us to implement that.
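
The use case described above can be sketched in plain Go: track a running average of the capacities observed when readers are returned to a `sync.Pool`, and size new buffers from that average rather than the worst-case block size. The sketch below assumes capacities are fed in from `GetBufferCapacity()`; the averager type itself is invented for illustration:

```go
package main

import "fmt"

// capAverager keeps a running average of observed buffer capacities so a
// sync.Pool's New function can allocate close to the typical need instead
// of the default block size. Hypothetical sketch of the use case that
// GetBufferCapacity() enables; not part of the s2 API.
type capAverager struct {
	sum, n int
}

// observe records one capacity, e.g. reader.GetBufferCapacity() on release.
func (a *capAverager) observe(c int) { a.sum += c; a.n++ }

// suggest returns the average seen so far, or def before any observations.
func (a *capAverager) suggest(def int) int {
	if a.n == 0 {
		return def // no data yet: fall back to the default block size
	}
	return a.sum / a.n
}

func main() {
	var a capAverager
	for _, c := range []int{4096, 8192, 4096} { // capacities seen on release to the pool
		a.observe(c)
	}
	fmt.Println(a.suggest(1 << 20))
}
```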

v1.16.6

s2: Clean up matchlen assembly (klauspost#825)