current rate limiting is ineffective #4359
We can actually make the API more general by using a function: not a `RejectUnverifiedConnection(rcvTime time.Time) bool`, but a `RejectConnection(rcvTime time.Time) bool`. If unset, we'd use a default rate limit. It is expected that most users would wrap a rate limiter here.
Here's an updated proposal. We actually only need a single callback:

```go
// VerifySourceAddress decides if a connection attempt originating from an unvalidated source
// address first needs to go through source address validation using QUIC's Retry mechanism,
// as described in RFC 9000 section 8.1.2.
// Validating the source address adds one additional network roundtrip to the handshake,
// and should therefore only be used if a suspiciously high number of incoming connections is recorded.
// If unset, a rate limit of 128 unvalidated connection attempts per second will be applied.
// For most use cases, wrapping the AllowN function of a rate.Limiter will be a reasonable
// implementation of this callback.
VerifySourceAddress func(receiveTime time.Time) bool
```

Question: Is it reasonable to set a default value here? It doesn't make sense to have a separate
That means that a reasonable implementation would look something like this:

```go
limiter := rate.NewLimiter(limit, burst)
quicConf := &quic.Config{
	GetConfigForClient: func(info *ClientHelloInfo) (*quic.Config, error) {
		if !limiter.Allow() {
			return nil, errors.New("rate limited")
		}
		if info.AddrVerified {
			// do IP-address based rate limiting based on info.RemoteAddr
		}
		return &quic.Config{} /* your config here */, nil
	},
}
```
I like the API as a greenfield design. I'm worried that reusing

Does it make sense to allow verified IP addresses to have their own rate limit? If another node went through the process of verifying its address, should it still get blocked if a malicious actor is consuming the whole rate limit? I'm thinking something like this:

```go
unverifiedLimiter := rate.NewLimiter(limit, burst)
verifiedLimiter := rate.NewLimiter(limit, burst)
quicConf := &quic.Config{
	GetConfigForClient: func(info *ClientHelloInfo) (*quic.Config, error) {
		if info.AddrVerified {
			if !verifiedLimiter.Allow() {
				return nil, errors.New("rate limited")
			}
			// do IP-address based rate limiting based on info.RemoteAddr
		} else if !unverifiedLimiter.Allow() {
			return nil, errors.New("rate limited")
		}
		return &quic.Config{} /* your config here */, nil
	},
}
```
Thanks! I really appreciate it that you're evaluating the tradeoffs here.
I think the flow is a little bit different here, since it only takes 1 RTT to convert an unverified into a verified client. Here's how I imagine the layers of defence to work:
Does this make sense, @MarcoPolo?
Do you think it would be useful to provide the source address here?
How would you act upon the source address here? It's not verified yet, so you'd be using spoofed information, right in the mechanism designed to prevent this spoofing.
I am not sure how useful this is in the wild, but you can whitelist intranet addresses when you know that packets from outside will be rejected at the ELB/gateway etc. if they present an intranet source IP.
Good point. We just need to be very clear that the source address cannot be trusted. |
Yeah, that makes sense. I think I need to figure out an ergonomic way to define rate limits for libp2p users. |
The current rate limiting only works for well-behaved clients, but fails against an active attacker. The reason is that the current implementation imposes a concurrency limit. However, the concurrency is attacker-controlled: an attacker can send a ClientHello, immediately followed by an invalid frame (potentially in the same packet). The server would then start the TLS handshake and incur the CPU cost for that, only to immediately close the connection, which would then no longer count towards the limit.
What we need to do instead is impose a limit that's independent of what the attacker is doing. One option would be token-bucket-style rate limiting, enforcing that we don't handle more than X (unvalidated / total) handshakes per second, with a burst budget of Y.
x/time/rate contains a token-bucket rate limiter, which we might be able to use in the API: https://pkg.go.dev/golang.org/x/time/rate