
gzhttp: Add BREACH mitigation #762

Merged
merged 4 commits into from
Feb 28, 2023
Conversation

klauspost
Owner

@klauspost klauspost commented Feb 27, 2023

See #761

BREACH mitigation

BREACH is a specialized attack where attacker-controlled data is injected alongside secret data in a response body. This can lead to side-channel attacks, where observing the compressed response size can reveal if there are overlaps between the secret data and the injected data.

For more information see https://breachattack.com/
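To make the size side channel concrete, here is a minimal stdlib sketch (the `token=`/`echo=` response layout is invented for illustration, not taken from gzhttp): when a reflected guess matches the secret, the compressor back-references it and the response shrinks.

```go
// Illustration of the BREACH side channel (hypothetical response layout):
// when attacker-controlled input matches secret data in the same response,
// the compressed body gets smaller, leaking information through its size.
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

// compressedLen returns the gzip-compressed size of a response body that
// embeds a secret alongside attacker-supplied (reflected) input.
func compressedLen(secret, injected string) int {
	var buf bytes.Buffer
	gw := gzip.NewWriter(&buf)
	gw.Write([]byte("token=" + secret + "; echo=" + injected))
	gw.Close()
	return buf.Len()
}

func main() {
	secret := "Abcdef1234567890"
	match := compressedLen(secret, "Abcdef1234567890") // guess equals the secret
	miss := compressedLen(secret, "Qrstuv0987654321")  // guess shares nothing
	fmt.Println("matching guess:", match, "bytes; non-matching:", miss, "bytes")
}
```

A matching guess compresses strictly smaller here, which is exactly the signal BREACH exploits.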

It can be hard to judge if you are vulnerable to BREACH. In general, if you do not include any user-provided content in the response body you are safe, but if you do, or you are in doubt, you can apply mitigations.

`gzhttp` can apply [Heal the Breach](https://ieeexplore.ieee.org/document/9754554), or improved content-aware padding.

```Go
// RandomJitter adds 1->n random bytes to output based on checksum of payload.
// Specify the amount of input to buffer before applying jitter.
// This should cover the sensitive part of your response.
// This can be used to obfuscate the exact compressed size.
// Specifying 0 will use a buffer size of 64KB.
// If a negative buffer is given, the amount of jitter will not be content dependent.
// This provides *less* security than applying content based jitter.
func RandomJitter(n, buffer int) option {
	...
```

The jitter is added as a "Comment" field. This field has a 1 byte overhead, so actual extra size will be 2 -> n+1 (inclusive).
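That overhead can be checked with a small stdlib sketch (the payload and comment strings here are arbitrary examples): compressing the same payload with and without a header comment of k bytes should differ by exactly k+1 bytes, the comment plus its NUL terminator.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

// gzipLen compresses payload with the given gzip header comment and
// returns the total output size. The comment must be set before the
// first Write, which is when the header is emitted.
func gzipLen(payload, comment string) int {
	var buf bytes.Buffer
	gw := gzip.NewWriter(&buf)
	gw.Comment = comment
	gw.Write([]byte(payload))
	gw.Close()
	return buf.Len()
}

func main() {
	plain := gzipLen("hello, world", "")
	padded := gzipLen("hello, world", "Padding-Padding-") // 16-byte comment
	fmt.Println("overhead:", padded-plain, "bytes")       // comment length + 1
}
```

The deflate stream itself is untouched; only the header grows, which is why this padding cannot interact with the compressed content.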

A good option would be to apply 32 random bytes, with default 64KB buffer: `gzhttp.RandomJitter(32, 0)`.

Note that flushing the data forces the padding to be applied, which means that only data before the flush is considered for content aware padding.

Examples

Adding the option `gzhttp.RandomJitter(32, 50000)` will apply from 1 up to 32 bytes of random data to the output.

The number of bytes added depends on the content of the first 50000 bytes, or all of them if the output was less than that.

Adding the option `gzhttp.RandomJitter(32, -1)` will apply from 1 up to 32 bytes of random data to the output. Each call will apply a random amount of jitter. This should be considered less secure than content-based jitter.

This can be used if responses are very big and deterministic, and the buffer size would be too large to cover where the mutation occurs.

@d-z-m

d-z-m commented Feb 28, 2023

Adding the option gzhttp.RandomJitter(32, 50000) will apply from 1 up to 32 bytes of random data to the output.

The padding isn't random, correct? It is derived from the `c.randomJitter = bytes.Repeat([]byte("Padding-"), 1+(n/8))` buffer, while the length of the padding is the little-endian uint32 representation of the first 4 bytes of the SHA-256 checksum of the page.

@klauspost
Owner Author

Correct.

The padding in the comment is `Padding-Padding-Padding-Padding-Pad.....`

The length is `1 + sha256(payload) MOD max_length`, or just random from `crypto/rand` if `buffer < 0`.
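The derivation above can be sketched in stdlib Go; the helper name and the exact reduction are assumptions pieced together from the comments in this thread, not the actual gzhttp internals:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// jitterLen returns a deterministic padding length in [1, max], derived
// from the first four bytes of the SHA-256 of the buffered payload,
// interpreted as a little-endian uint32 and reduced mod max.
// (Illustrative only; not the gzhttp implementation.)
func jitterLen(payload []byte, max uint32) uint32 {
	sum := sha256.Sum256(payload)
	return 1 + binary.LittleEndian.Uint32(sum[:4])%max
}

func main() {
	// The same payload always yields the same length, so an attacker
	// repeating identical requests learns nothing new from the size.
	fmt.Println(jitterLen([]byte("example response body"), 32))
}
```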

@d-z-m

d-z-m commented Feb 28, 2023

Correct.

The padding in the comment is Padding-Padding-Padding-Padding-Pad.....

The length is 1 + sha256(payload) MOD max_length

Gotcha, ideally the gzhttp/README.md would reflect this, or say something equivalent. Otherwise LGTM 👍

@klauspost klauspost merged commit aeed811 into master Feb 28, 2023
@klauspost klauspost deleted the breach-mitigation branch February 28, 2023 14:17
@greatroar
Contributor

Is this really necessary? Not Heal the Breach I mean, but the random jitter? If I understand the paper correctly, HTB only needs a filename or comment field of random length, not random content, because neither of those fields affects the compression of the content. See, e.g., the Django implementation.

@klauspost
Owner Author

klauspost commented Mar 3, 2023

@greatroar The content is not random.

The padding in the comment is `Padding-Padding-Padding-Padding-Pad.....`

Literally.

(and it uses the comment, not the file name)

@greatroar
Contributor

Sorry, I still misunderstood what you said. Never mind.

@klauspost
Owner Author

@greatroar With a random padding you can still easily deduce the compressed size. With 100 bytes padding:

package main

import (
	"bytes"
	"compress/gzip"
	"crypto/rand"
	"fmt"
	"math"
)

func main() {
	payload := `Error: msgp: wanted array of size 7; got 5 at Cache/testbucket/go3/src/runtime/mranges_test.go (msgp.ArrayError)
       5: e:\minio\minio\internal\logger\logger.go:258:logger.LogIf()
       4: e:\minio\minio\internal\logger\logonce.go:104:logger.(*logOnceType).logOnceIf()
       3: e:\minio\minio\internal\logger\logonce.go:135:logger.LogOnceIf()
       2: e:\minio\minio\cmd\data-usage-cache.go:903:cmd.(*dataUsageCache).load()
       1: e:\minio\minio\cmd\erasure.go:467:cmd.erasureObjects.nsScanner.func3()`

	const maxDelta = 99
	tries := 0
	var buf bytes.Buffer
	gw := gzip.NewWriter(&buf)
	gw.Write([]byte(payload))
	gw.Close()
	want := buf.Len()
	var minSeen, maxSeen = math.MaxInt32, 0
	for {
		buf.Reset()
		gw.Reset(&buf)
		// Heal the Breach, following the paper:
		// https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9754554.
		// Essentially, we set the filename in the gzip header to a string
		// of random length.
		l, err := randomLength()
		if err != nil {
			panic(err)
		}
		gw.Header.Name = htbFilename[:l]
		gw.Write([]byte(payload))
		gw.Close()
		if buf.Len() < minSeen {
			minSeen = buf.Len()
		}
		if buf.Len() > maxSeen {
			maxSeen = buf.Len()
		}
		tries++
		if maxSeen-minSeen == maxDelta {
			got := minSeen - 2
			if want != got {
				fmt.Println("wow, so wrong, want:", want, "!= got:", got)
			}
			fmt.Println("Compressed size is", got, " That took", tries, "roundtrips to figure out.")
			return
		}
	}
}

// Filename for Heal the Breach. Any ASCII string will do.
const htbFilename = "" +
	"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" +
	"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

// Returns a random filename length for Heal the Breach.
// The length is uniformly drawn from [1,100]. Such lengths are accepted
// by all browsers, according to the Heal the Breach paper.
func randomLength() (int, error) {
	const maxtries = 1_000_000
	var htbBuf [1]byte
	// Rejection sampling.
	for i := 0; i < maxtries; i++ {
		n, err := rand.Reader.Read(htbBuf[:])
		if n == 0 || err != nil {
			return 0, fmt.Errorf("gzhttp: read crypto/rand: %w", err)
		}

		b := htbBuf[0]
		b >>= 1
		if b < 100 {
			return 1 + int(b), nil
		}
	}

	err := fmt.Errorf("gzhttp: no number < 100 in %d tries", maxtries)
	return 0, err
}

Output:

Compressed size is 280  That took 104 roundtrips to figure out.

Content based padding will make that impossible whenever the sensitive content is within the buffer.

@greatroar
Contributor

The HTB paper says

The size of each request will only be reliable after making several queries with the same input and computing the average size.

But determining the average isn't necessary, as your exploit shows. Did the authors just assume the attacker doesn't know the maximum padding length for the mitigation? 😮

@klauspost
Owner Author

Yeah. Even so, determining that the padding delta is 99 is also pretty trivial. With, say, 400 tries you can determine that pretty reliably.

You just need the delta and the min size. Here is your paper. You can call it "cure the breach" 😜

@d-z-m

d-z-m commented Mar 4, 2023

The HTB mitigation is effectively just adding noise to the channel. The aim is to make the attack impractical, by making it substantially more difficult.

On average, for a padding delta of 100, it takes ~150 round trips to deduce the compressed size, which corresponds to a single test of a candidate character in the paper. The number of requests an attacker would have to make to extract enough secret characters from the response body would be very large, and entirely detectable/blockable by an IDS/WAF capability.

package main

import (
	"bytes"
	"compress/gzip"
	"crypto/rand"
	"fmt"
	"math"
)

func main() {
	payload := `Error: msgp: wanted array of size 7; got 5 at Cache/testbucket/go3/src/runtime/mranges_test.go (msgp.ArrayError)
       5: e:\minio\minio\internal\logger\logger.go:258:logger.LogIf()
       4: e:\minio\minio\internal\logger\logonce.go:104:logger.(*logOnceType).logOnceIf()
       3: e:\minio\minio\internal\logger\logonce.go:135:logger.LogOnceIf()
       2: e:\minio\minio\cmd\data-usage-cache.go:903:cmd.(*dataUsageCache).load()
       1: e:\minio\minio\cmd\erasure.go:467:cmd.erasureObjects.nsScanner.func3()`

	const maxDelta = 99
	const attemptsToAvg = 1000
	tries := 0
	var buf bytes.Buffer
	gw := gzip.NewWriter(&buf)
	gw.Write([]byte(payload))
	gw.Close()
	want := buf.Len()
	minSeen, maxSeen := math.MaxInt32, 0
	var entries []int
	for j := 0; j < attemptsToAvg; j++ {
		minSeen, maxSeen = math.MaxInt32, 0
		tries = 0
		for {
			buf.Reset()
			gw.Reset(&buf)
			// Heal the Breach, following the paper:
			// https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9754554.
			// Essentially, we set the filename in the gzip header to a string
			// of random length.
			l, err := randomLength()
			if err != nil {
				panic(err)
			}
			gw.Header.Name = htbFilename[:l]
			gw.Write([]byte(payload))
			gw.Close()

			if buf.Len() < minSeen {
				minSeen = buf.Len()
			}
			if buf.Len() > maxSeen {
				maxSeen = buf.Len()
			}
			tries++
			if maxSeen-minSeen == maxDelta {
				got := minSeen - 2
				if want != got {
					fmt.Println("wow, so wrong, want:", want, "!= got:", got)
				}

				entries = append(entries, tries)
				//fmt.Println("Compressed size is", got, " That took", tries, "roundtrips to figure out.")
				break
			}
		}
	}
	fmt.Printf("Average over %d tries was %d\n", attemptsToAvg, avg(entries))
}

func avg(ints []int) int {
	var sum int
	for _, v := range ints {
		sum += v
	}

	return sum / len(ints)
}

// Filename for Heal the Breach. Any ASCII string will do.
const htbFilename = "" +
	"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" +
	"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

// Returns a random filename length for Heal the Breach.
// The length is uniformly drawn from [1,100]. Such lengths are accepted
// by all browsers, according to the Heal the Breach paper.
func randomLength() (int, error) {
	const maxtries = 1_000_000
	var htbBuf [1]byte
	// Rejection sampling.
	for i := 0; i < maxtries; i++ {
		n, err := rand.Reader.Read(htbBuf[:])
		if n == 0 || err != nil {
			return 0, fmt.Errorf("gzhttp: read crypto/rand: %w", err)
		}

		b := htbBuf[0]
		b >>= 1
		if b < 100 {
			return 1 + int(b), nil
		}
	}

	err := fmt.Errorf("gzhttp: no number < 100 in %d tries", maxtries)
	return 0, err
}

@greatroar
Contributor

greatroar commented Mar 5, 2023

Understood, but the paper promises a factor 50,000 increase in the number of queries for n=100, not 150. It focuses entirely on the problem of determining the mean number of bytes added, which isn't necessary if the attacker knows how the mitigation is implemented.

@d-z-m

d-z-m commented Mar 5, 2023

Understood, but the paper promises a factor 50,000 increase in the number of queries for n=100, not 150.

True, I'm not sure why their threat model doesn't include the attacker being able to deduce the HTB size/delta, and from there being able to develop a more efficient exploit.

@greatroar
Contributor

What is even stranger is that they cite the SafeDeflate paper, which starts off by dismissing the idea of padding the compressed response as ineffective...

kodiakhq bot pushed a commit to cloudquery/filetypes that referenced this pull request Apr 1, 2023
kodiakhq bot pushed a commit to cloudquery/plugin-sdk that referenced this pull request Jul 1, 2023
kodiakhq bot pushed a commit to cloudquery/plugin-pb-go that referenced this pull request Aug 1, 2023
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [github.com/klauspost/compress](https://togithub.com/klauspost/compress) | indirect | minor | `v1.15.15` -> `v1.16.7` |

---

### Release Notes

<details>
<summary>klauspost/compress (github.com/klauspost/compress)</summary>

### [`v1.16.7`](https://togithub.com/klauspost/compress/releases/tag/v1.16.7)

[Compare Source](https://togithub.com/klauspost/compress/compare/v1.16.6...v1.16.7)

#### What's Changed

-   zstd: Fix default level first dictionary encode by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#829
-   docs: Fix typo in security advisory URL by [@&#8203;vcabbage](https://togithub.com/vcabbage) in [klauspost/compress#830
-   s2: add GetBufferCapacity() method by [@&#8203;GiedriusS](https://togithub.com/GiedriusS) in [klauspost/compress#832

#### New Contributors

-   [@&#8203;vcabbage](https://togithub.com/vcabbage) made their first contribution in [klauspost/compress#830
-   [@&#8203;GiedriusS](https://togithub.com/GiedriusS) made their first contribution in [klauspost/compress#832

**Full Changelog**: klauspost/compress@v1.16.6...v1.16.7

### [`v1.16.6`](https://togithub.com/klauspost/compress/releases/tag/v1.16.6)

[Compare Source](https://togithub.com/klauspost/compress/compare/v1.16.5...v1.16.6)

#### What's Changed

-   zstd: correctly ignore WithEncoderPadding(1) by [@&#8203;ianlancetaylor](https://togithub.com/ianlancetaylor) in [klauspost/compress#806
-   gzhttp: Handle informational headers by [@&#8203;rtribotte](https://togithub.com/rtribotte) in [klauspost/compress#815
-   zstd: Add amd64 match length assembly by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#824
-   s2: Improve Better compression slightly by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#663
-   s2: Clean up matchlen assembly by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#825

#### New Contributors

-   [@&#8203;rtribotte](https://togithub.com/rtribotte) made their first contribution in [klauspost/compress#815
-   [@&#8203;dveeden](https://togithub.com/dveeden) made their first contribution in [klauspost/compress#816

**Full Changelog**: klauspost/compress@v1.16.5...v1.16.6

### [`v1.16.5`](https://togithub.com/klauspost/compress/releases/tag/v1.16.5)

[Compare Source](https://togithub.com/klauspost/compress/compare/v1.16.4...v1.16.5)

#### What's Changed

-   zstd: readByte needs to use io.ReadFull by [@&#8203;jnoxon](https://togithub.com/jnoxon) in [klauspost/compress#802
-   gzip: Fix WriterTo after initial read by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#804

#### New Contributors

-   [@&#8203;jnoxon](https://togithub.com/jnoxon) made their first contribution in [klauspost/compress#802

**Full Changelog**: klauspost/compress@v1.16.4...v1.16.5

### [`v1.16.4`](https://togithub.com/klauspost/compress/releases/tag/v1.16.4)

[Compare Source](https://togithub.com/klauspost/compress/compare/v1.16.3...v1.16.4)

#### What's Changed

-   s2: Fix huge block overflow by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#779
-   s2: Allow CustomEncoder fallback by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#780
-   zstd: Fix amd64 not always detecting corrupt data by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#785
-   zstd: Improve zstd best efficiency by [@&#8203;klauspost](https://togithub.com/klauspost) and [@&#8203;greatroar](https://togithub.com/greatroar) in [klauspost/compress#784
-   zstd: Make load(32|64)32 safer and smaller by [@&#8203;greatroar](https://togithub.com/greatroar) in [klauspost/compress#788
-   zstd: Fix quick reject on long backmatches by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#787
-   zstd: Revert table size change  by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#789
-   zstd: Respect WithAllLitEntropyCompression by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#792
-   zstd: Fix back-referenced offset by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#793
-   zstd: Load source value at start of loop by [@&#8203;greatroar](https://togithub.com/greatroar) in [klauspost/compress#794
-   zstd: Shorten checksum code by [@&#8203;greatroar](https://togithub.com/greatroar) in [klauspost/compress#795
-   zstd: Fix fallback on incompressible block by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#798
-   gzhttp: Suppport ResponseWriter Unwrap() in gzhttp handler by [@&#8203;jgimenez](https://togithub.com/jgimenez) in [klauspost/compress#799

#### New Contributors

-   [@&#8203;jgimenez](https://togithub.com/jgimenez) made their first contribution in [klauspost/compress#799

**Full Changelog**: klauspost/compress@v1.16.3...v1.16.4

### [`v1.16.3`](https://togithub.com/klauspost/compress/releases/tag/v1.16.3)

[Compare Source](https://togithub.com/klauspost/compress/compare/v1.16.2...v1.16.3)

**Full Changelog**: klauspost/compress@v1.16.2...v1.16.3

### [`v1.16.2`](https://togithub.com/klauspost/compress/releases/tag/v1.16.2)

[Compare Source](https://togithub.com/klauspost/compress/compare/v1.16.1...v1.16.2)

#### What's Changed

-   Fix Goreleaser permissions by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#777

**Full Changelog**: klauspost/compress@v1.16.1...v1.16.2

### [`v1.16.1`](https://togithub.com/klauspost/compress/releases/tag/v1.16.1)

[Compare Source](https://togithub.com/klauspost/compress/compare/v1.16.0...v1.16.1)

#### What's Changed

-   zstd: Speed up + improve best encoder by [@&#8203;greatroar](https://togithub.com/greatroar) in [klauspost/compress#776](https://togithub.com/klauspost/compress/pull/776)
-   s2: Add Intel LZ4s converter by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#766](https://togithub.com/klauspost/compress/pull/766)
-   gzhttp: Add BREACH mitigation by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#762](https://togithub.com/klauspost/compress/pull/762)
-   gzhttp: Remove a few unneeded allocs by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#768](https://togithub.com/klauspost/compress/pull/768)
-   gzhttp: Fix crypto/rand.Read usage by [@&#8203;greatroar](https://togithub.com/greatroar) in [klauspost/compress#770](https://togithub.com/klauspost/compress/pull/770)
-   gzhttp: Use SHA256 as paranoid option by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#769](https://togithub.com/klauspost/compress/pull/769)
-   gzhttp: Use strings for randomJitter to skip a copy by [@&#8203;greatroar](https://togithub.com/greatroar) in [klauspost/compress#767](https://togithub.com/klauspost/compress/pull/767)
-   zstd: Fix ineffective block size check by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#771](https://togithub.com/klauspost/compress/pull/771)
-   zstd: Check FSE init values by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#772](https://togithub.com/klauspost/compress/pull/772)
-   zstd: Report EOF from byteBuf.readBig by [@&#8203;greatroar](https://togithub.com/greatroar) in [klauspost/compress#773](https://togithub.com/klauspost/compress/pull/773)
-   huff0: Speed up compress1xDo by [@&#8203;greatroar](https://togithub.com/greatroar) in [klauspost/compress#774](https://togithub.com/klauspost/compress/pull/774)
-   tests: Remove fuzz printing by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#775](https://togithub.com/klauspost/compress/pull/775)
-   tests: Add CICD Fuzz testing by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#763](https://togithub.com/klauspost/compress/pull/763)
-   ci: set minimal permissions to GitHub Workflows by [@&#8203;diogoteles08](https://togithub.com/diogoteles08) in [klauspost/compress#765](https://togithub.com/klauspost/compress/pull/765)

#### New Contributors

-   [@&#8203;diogoteles08](https://togithub.com/diogoteles08) made their first contribution in [klauspost/compress#765](https://togithub.com/klauspost/compress/pull/765)

**Full Changelog**: klauspost/compress@v1.16.0...v1.16.1

### [`v1.16.0`](https://togithub.com/klauspost/compress/releases/tag/v1.16.0)

[Compare Source](https://togithub.com/klauspost/compress/compare/v1.15.15...v1.16.0)

#### What's Changed

-   s2: Add Dictionary support by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#685](https://togithub.com/klauspost/compress/pull/685)
-   s2: Add Compression Size Estimate by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#752](https://togithub.com/klauspost/compress/pull/752)
-   s2: Add support for custom stream encoder by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#755](https://togithub.com/klauspost/compress/pull/755)
-   s2: Add LZ4 block converter by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#748](https://togithub.com/klauspost/compress/pull/748)
-   s2: Support io.ReaderAt in ReadSeeker by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#747](https://togithub.com/klauspost/compress/pull/747)
-   s2c/s2sx: Use concurrent decoding by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#746](https://togithub.com/klauspost/compress/pull/746)
-   tests: Upgrade to Go 1.20 by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#749](https://togithub.com/klauspost/compress/pull/749)
-   Update all (command) dependencies by [@&#8203;klauspost](https://togithub.com/klauspost) in [klauspost/compress#758](https://togithub.com/klauspost/compress/pull/758)

**Full Changelog**: klauspost/compress@v1.15.15...v1.16.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 4am on the first day of the month" (UTC), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Renovate Bot](https://togithub.com/renovatebot/renovate).
<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzNi4yNi4xIiwidXBkYXRlZEluVmVyIjoiMzYuMjYuMSIsInRhcmdldEJyYW5jaCI6Im1haW4ifQ==-->
```diff
@@ -112,7 +123,7 @@ func (w *GzipResponseWriter) Write(b []byte) (int, error) {
 	ct := hdr.Get(contentType)
 	if cl == 0 || cl >= w.minSize && (ct == "" || w.contentTypeFilter(ct)) {
 		// If the current buffer is less than minSize and a Content-Length isn't set, then wait until we have more data.
-		if len(w.buf) < w.minSize && cl == 0 {
+		if len(w.buf) < w.minSize && cl == 0 || (w.jitterBuffer > 0 && len(w.buf) < w.jitterBuffer) {
```
**@adriansmares** commented (Nov 9, 2023):

Why is the insertion of the Content-Type header conditional on the entropy available for the jitter?

Responses of size smaller than jitterBuffer never seem to actually get a Content-Type header in my tests. As a note, I am quite sure that the underlying handlers are not guaranteed to call Flush, so I don't see how the Content-Type would be set in such low-size scenarios (above the minimum size, but below jitterBuffer).

**@klauspost** (owner, author) replied:

@adriansmares Please open an issue with a reproducer.
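
The condition in the diff above reads: keep buffering while the body is below `minSize` with no Content-Length set, or while a jitter buffer is configured and not yet full. Until buffering ends, headers are not written out — which is the behavior the comment asks about. A self-contained sketch of that decision, with hypothetical parameter names mirroring the fields in the diff (not the actual gzhttp code):

```go
package main

import "fmt"

// shouldBuffer mirrors, in simplified hypothetical form, the condition from
// the diff above: keep buffering while the body is below minSize and no
// Content-Length is known, or while a configured jitter buffer has not yet
// been filled with content to hash.
func shouldBuffer(buffered, contentLength, minSize, jitterBuffer int) bool {
	return (buffered < minSize && contentLength == 0) ||
		(jitterBuffer > 0 && buffered < jitterBuffer)
}

func main() {
	// 10 KB buffered, Content-Length known, minSize 1400, 64 KB jitter
	// buffer: still buffering, because the jitter buffer is not yet full.
	fmt.Println(shouldBuffer(10*1024, 10*1024, 1400, 64*1024)) // true

	// Same response with jitter disabled: written out immediately.
	fmt.Println(shouldBuffer(10*1024, 10*1024, 1400, 0)) // false
}
```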
