Zipkin tracing format update #1860
Hey @zsojma -- thanks for reporting this. I had previously opened #1132 to track changing the long span names to something more intelligible, but we ultimately decided against it. More context about that here: #1135 (comment) The underlying issue is that the client span name corresponds to the id used by linkerd to label the clients that it builds dynamically. If we strip out parts of the id, then they become ambiguous.

But I totally agree that in a kubernetes setup, where multiple transformers and namers are in use, the ids become unwieldy. Maybe we should think about a kubernetes-specific fix like you suggested, since the issue is most pronounced in that environment.
@klingerf This sounds like you're deciding on a user-facing feature based on an internal implementation detail. You can still keep the unique IDs internally, and associate them with a normalized structure like
Tracing systems can easily handle high-dimensional metadata, and can actually provide better / more interesting aggregations when metadata is provided in a normalized form instead of a single denormalized string.
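To illustrate the normalization being suggested here, the sketch below splits one of the long linkerd client ids from this issue into a short span name plus span tags. This is a hypothetical illustration, not linkerd code: the tag names (`l5d.router`, `l5d.namer`, `k8s.namespace`, `k8s.port`) and the parsing rule are assumptions based on the id layout shown in this issue.

```python
# Hypothetical sketch (not linkerd code): split a linkerd client id such as
#   %/io.l5d.k8s.daemonset/l5d-system/http-incoming/l5d/#/io.l5d.k8s.http/k2-development/http/orchestration
# into a short span name plus normalized span tags. All tag names are assumed.

def normalize_client_id(client_id):
    """Return a short span name plus span tags for a linkerd client id."""
    # The '/#/' separates the transformer/router prefix from the namer-resolved path.
    prefix, _, resolved = client_id.partition("/#/")
    # In the ids shown in this issue, the resolved path ends with:
    #   namer / namespace / port / service
    namer, namespace, port, service = resolved.strip("/").split("/")[-4:]
    return {
        "span_name": service,  # short, aggregatable name
        "tags": {
            "l5d.router": prefix,  # full prefix kept so ids stay unambiguous
            "l5d.namer": namer,
            "k8s.namespace": namespace,
            "k8s.port": port,
        },
    }
```

With this shape, the tracer can aggregate on the short span name while the full prefix remains searchable as a tag, so the ids stay unambiguous.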
@yurishkuro That sounds totally reasonable. I haven't looked at Finagle's trace implementation in a while. Do you know if it's possible to set span names to the normalized structure like you suggest? I seem to remember that it required strings for span names. |
Haven't looked at finagle in a couple of years, but fairly certain span names are plain strings. I'm not suggesting that they should be structs, rather that the rest of the dimensions can be recorded as span tags (finagle might call them binary annotations, following zipkin notation). |
…ume it (linkerd#1860)

Add a routes command which displays per-route stats for services that have service profiles defined. This change has three parts:

* A new public-api RPC called `TopRoutes` which serves per-route stat data about a service
* An implementation of `TopRoutes` in the public-api service. This implementation reads per-route data from Prometheus. This is very similar to how the StatSummaries RPC works, and much of the code was able to be refactored and shared.
* A new CLI command called `routes` which displays the per-route data in a tabular or json format. This is very similar to the `stat` command, and much of the code was able to be refactored and shared.

Note that as of the currently targeted proxy version, only outbound route stats are supported, so the `--from` flag must be included in order to see data. This restriction will be lifted in an upcoming change once we add support for inbound route stats as well.

Signed-off-by: Alex Leong <[email protected]>
Issue Type:
What happened:
Hello, when a trace is logged into Zipkin (we actually use Jaeger https://github.com/jaegertracing, but there should be no difference related to this issue - it is backward compatible with Zipkin), linkerd logs service names in the following complex format (we are using the io.zipkin.http telemeter):

%/io.l5d.k8s.daemonset/l5d-system/http-incoming/l5d/#/io.l5d.k8s.http/k2-development/http/orchestration

%/io.l5d.k8s.localnode/k8s-master-1.alz.lcl/#/io.l5d.k8s/k2-development/http/orchestration - differs for each node, which is logged here as k8s-master-1.alz.lcl
We currently have a kubernetes cluster with 6 nodes and about 20 applications deployed (there will be more in the future), which results in around 140 (20 + 6 * 20) different services logged in the tracer:

And this makes it hard to locate all traces for a concrete deployed application, because we need to choose between incoming/outgoing requests and between nodes. A concrete trace now looks like the following:
What you expected to happen:
It would be nice if linkerd could have an option to log only the names of concrete k8s services, with no other related information. Something like the following (related to the previous picture):

Where k2-development is the name of a k8s namespace, http/grpc are the names of ports defined by the k8s service, and orchestration etc. are the names of k8s services. Other related information (like the name of a router, %/io.l5d.k8s.daemonset/l5d-system/grpc-incoming/l5d/#/io.l5d.k8s.grpc/) would be logged as either a span tag or a Process tag.

What do you think? Thanks
Environment: