Connection pool biased towards slow connections #8244
Thanks for the detailed analysis. This is really useful. Possibly similar to #1397. There is a larger feature request #4530 for HTTP/2, where we "aggressively try to deduplicate connections to the same destination". I'd be curious whether there is a smaller API that lets us experiment with these strategies before we solve it completely, but I can't promise anything.
Can I assume you are on HTTP/1.1?
Correct, we're on HTTP/1.1.
In the short term, what might be viable is to explore a strictly internal strategy API, with the default implementation preserving what we have now. That would give us a way to test different strategies; a rough sketch follows.
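Roughly something like this, as a Kotlin sketch only. Every name here (`ConnectionSelectionStrategy`, `PooledConnection`, the strategy objects) is hypothetical; okhttp has no such API today, and the types are simplified stand-ins for the pool internals:

```kotlin
// Hypothetical sketch only; okhttp exposes no such API today.
// PooledConnection is a simplified stand-in for the pool's internal type.
data class PooledConnection(val id: Int, val isEligible: Boolean)

interface ConnectionSelectionStrategy {
  /** Pick a pooled connection to reuse, or return null to dial a fresh one. */
  fun select(idle: List<PooledConnection>): PooledConnection?
}

/** Default strategy, mirroring current behavior: the first eligible connection wins. */
object FirstEligible : ConnectionSelectionStrategy {
  override fun select(idle: List<PooledConnection>) =
      idle.firstOrNull { it.isEligible }
}

/** One experiment: pick uniformly at random among eligible connections,
 *  removing the positional advantage discussed later in this thread. */
object RandomEligible : ConnectionSelectionStrategy {
  override fun select(idle: List<PooledConnection>) =
      idle.filter { it.isEligible }.randomOrNull()
}
```

The point of keeping it internal is that the default stays byte-for-byte compatible while alternatives can be compared in tests.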
I love this analysis. Thanks!
This paper on Metastable Failures in Distributed Systems has an excellent war story about connection pooling logic that unintentionally favors slow connections:
Quoting from section 2.4 on Link Imbalance:
We experienced something similar using okhttp as a client: disabling okhttp's connection pooling completely at around 15:20 had a very obvious effect on load distribution. The metric is from the downstream server's perspective:
Our setup involves kube-proxy, which provides a virtual IP and round robins TCP connections to pods that are part of the service, i.e. all okhttp knows is that there's a single IP. The nature of the service is that some requests are naturally slow, while most are fast, i.e. there's an expected significant difference between p95 and p50.
Before ultimately disabling connection pooling (as a stop-gap, and to confirm our suspicions), we reduced the keep-alive timeout, though not as aggressively as described in this post about a similar problem with okhttp's connection pooler: Kubernetes network load balancing using OkHttp client. That had some effect, but the bias issue remained.
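For anyone wanting to reproduce the two stop-gaps, both can be expressed with okhttp's public `ConnectionPool` constructor; the durations below are illustrative, not the exact values we used:

```kotlin
import java.util.concurrent.TimeUnit
import okhttp3.ConnectionPool
import okhttp3.OkHttpClient

// Stop-gap 1: shorten the keep-alive window so idle connections are
// recycled sooner (the default pool keeps up to 5 idle connections
// for 5 minutes). The 30-second value here is illustrative.
val shortKeepAlive = OkHttpClient.Builder()
    .connectionPool(ConnectionPool(5, 30, TimeUnit.SECONDS))
    .build()

// Stop-gap 2: effectively disable pooling by keeping no idle connections,
// so every request dials a fresh TCP connection and gets a fresh
// kube-proxy round-robin pick, trading connection setup cost for even load.
val noPooling = OkHttpClient.Builder()
    .connectionPool(ConnectionPool(0, 1, TimeUnit.NANOSECONDS))
    .build()
```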
When the pool selects a connection, it picks the first available one. The idle-connection cleanup removes the connections that have been idle the longest.
Our theory is this: when cleanup happens, the connections to slow pods are more likely to be active, and thus ineligible for cleanup. Over time, their connections gravitate towards the front of the queue and get selected increasingly often, which makes them even slower and even more likely to be selected – much like the story from the linked paper. A toy simulation of this loop follows below.
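To make the suspected feedback loop concrete, here is a small self-contained simulation (not okhttp code) of a "first idle connection wins" pool in front of one slow and three fast backends, with round-robin dialing standing in for kube-proxy and a simple keep-alive eviction standing in for the pool cleanup. All the numbers are arbitrary assumptions; the point is only the qualitative drift:

```kotlin
// Toy model of the suspected mechanism; this is not okhttp code.
class Conn(val backend: Int, var busyUntil: Int = 0, var lastUsed: Int = 0)

fun main() {
  val service = intArrayOf(16, 2, 2, 2)  // ticks per request; backend 0 is slow
  val keepAlive = 8                      // evict connections idle longer than this
  val pool = mutableListOf<Conn>()       // ordered front to back, like the pool's queue
  var nextDial = 0                       // kube-proxy-style round robin over backends
  val hits = IntArray(service.size)

  for (tick in 0 until 10_000) {
    repeat(4) {                          // four requests arrive per tick
      // Selection: the first idle connection wins; if none is idle,
      // dial a new connection (round robin) and append it at the back.
      val conn = pool.firstOrNull { c -> c.busyUntil <= tick }
          ?: Conn(backend = nextDial++ % service.size).also { c -> pool.add(c) }
      conn.busyUntil = tick + service[conn.backend]
      conn.lastUsed = tick
      hits[conn.backend]++
    }
    // Cleanup: drop connections that sat idle past the keep-alive window.
    // Slow-backend connections are busy more often, so they tend to survive;
    // each eviction ahead of them also moves them closer to the front.
    pool.removeAll { c -> c.busyUntil <= tick && tick - c.lastUsed > keepAlive }

    if (tick % 2_000 == 1_999) {         // per-window report
      val slowConns = pool.count { c -> c.backend == 0 }
      val slowShare = hits[0].toDouble() / hits.sum()
      println("tick=$tick pool=${pool.size} slowConns=$slowConns slowShare=%.2f".format(slowShare))
      hits.fill(0)
    }
  }
}
```

In this model the slow backend's connections are busy at cleanup time and survive while fast connections churn, which is the front-of-queue drift we believe we observed. Real pool internals differ, so treat it as an illustration of the theory rather than a reproduction.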
Thanks a lot for making okhttp, and sorry we don't have a reproduction against the real client, but we wanted to at least share our suspicions.