Support rate-limiting #1006

Open · olix0r opened this issue Jan 27, 2017 · 5 comments

Comments

olix0r (Member) commented Jan 27, 2017

Service owners typically want some ability to enforce per-client rate-limits. Expose a plugin interface that is able to enforce rate-limiting.
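
For illustration, here is a minimal sketch of what such a plugin might look like, assuming a Finagle-style `SimpleFilter` (the abstraction linkerd 1.x plugins build on) and a simple token bucket. `RateLimitFilter`, `TokenBucket`, and `onReject` are illustrative names, not an actual linkerd API:

```scala
import com.twitter.finagle.{Service, SimpleFilter}
import com.twitter.util.Future

// Rejects requests once the bucket is empty; otherwise passes them through.
class RateLimitFilter[Req, Rep](
  bucket: TokenBucket,
  onReject: () => Future[Rep]
) extends SimpleFilter[Req, Rep] {
  def apply(req: Req, service: Service[Req, Rep]): Future[Rep] =
    if (bucket.tryAcquire()) service(req)
    else onReject() // e.g. synthesize an HTTP 429 response here
}

// A coarse token bucket, refilled lazily on each acquisition attempt.
class TokenBucket(capacity: Long, refillPerSecond: Long) {
  private[this] var tokens = capacity
  private[this] var lastRefillNs = System.nanoTime()

  def tryAcquire(): Boolean = synchronized {
    val now = System.nanoTime()
    val refill = ((now - lastRefillNs) / 1e9 * refillPerSecond).toLong
    if (refill > 0) {
      tokens = math.min(capacity, tokens + refill)
      lastRefillNs = now
    }
    if (tokens > 0) { tokens -= 1; true } else false
  }
}
```

A per-client variant would key a map of buckets on some client identity (e.g. source IP or a request header) rather than sharing one bucket globally.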

andrejvanderzee commented:

This would be useful. Is there any way yet to enforce global or per-client rate limiting?

wmorgan (Member) commented Jul 11, 2017

@andrejvanderzee Not explicitly. Circuit breaking and latency-aware load balancing will kick in as instances slow down or start failing under load, but setting an explicit limit is still on the roadmap. Watch this ticket for updates, though!

leozc (Contributor) commented Jan 28, 2018

Any plans on this? Rate limiting at the service/API level would be a very useful feature.

wmorgan (Member) commented Jan 28, 2018

@leozc status quo for now. PRs welcome though :)

leozc (Contributor) commented Jan 28, 2018

"per-client rate-limit" is a global limit - aka assume we have n boxes - each client on a box is 1/n of the quota.

Can we reuse the storage model/interface in Dtab storage?
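
For what it's worth, a rough sketch of that partitioning arithmetic, with purely illustrative names (not an existing linkerd interface):

```scala
// Hypothetical static partitioning of a global quota across n instances:
// each box locally enforces globalRps / n, as suggested above.
case class RateLimitQuota(globalRps: Long, instanceCount: Int) {
  require(instanceCount > 0, "need at least one instance")
  def localRps: Long = math.max(1L, globalRps / instanceCount)
}

// e.g. RateLimitQuota(globalRps = 1000, instanceCount = 4).localRps == 250
```

Static partitioning under-uses the global quota when load is uneven across boxes, which is exactly why a shared store (such as the Dtab storage mentioned above) would come into play for coordinating counts.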

Tim-Brooks pushed a commit to Tim-Brooks/linkerd that referenced this issue Dec 20, 2018
Depends on tower-rs/tower#75. Required for linkerd#386

In order for the proxy to use the TLS support metadata from the Destination 
service correctly, we determined that the code for dynamically changing the
labels on an already-bound service should be removed, and any change in
metadata should cause an endpoint to be rebound.

I've modified the proxy so that we no longer update the labels using 
`futures-watch` (as a sidenote, we no longer depend on that crate). Metadata
update events now cause the `tower-discover::Discover` implementation for 
`DestinationSet` to re-insert the changed endpoint into the load balancer.
Upstream PR tower-rs/tower#75 in tower-balance changes the load balancer 
to honor duplicate insertions by replacing the old endpoint rather than 
ignoring them; that change is necessary for the tests to pass on this branch.

Signed-off-by: Eliza Weisman <[email protected]>
Tim-Brooks pushed a commit to Tim-Brooks/linkerd that referenced this issue Dec 20, 2018
…Connect::new` (linkerd#1008)

Depends on linkerd#1006. Depends on linkerd#1041.

This PR adds a `tls_identity` field to the endpoint `Metadata` struct, which
contains the `TlsIdentity` metadata sent by the control plane's Destination
service. 

I changed the `ctx::transport::Client` context struct to hold a `Metadata`,
rather than just the labels, so the TLS support determination is always
available. In addition, I've added it as an additional parameter to 
`transport::Connect::new`, so that when we create a new connection, the TLS
code will be able to determine whether or not TLS is supported and, if it is, 
how to verify the endpoint's identity.

Signed-off-by: Eliza Weisman <[email protected]>