Linkerd fails to push traces to Zipkin after the Zipkin service is re-created in Kubernetes #1654

liangrog commented Sep 23, 2017

Bug report

If the Zipkin service in Kubernetes is deleted and then re-created, Linkerd no longer seems to be able to reach the new service. I suspect DNS caching of stale records.
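
For context, a Service that is deleted and re-created is normally assigned a new ClusterIP (unless spec.clusterIP is pinned), which fits the stale-DNS suspicion. A quick way to observe this (illustrative commands, not part of the original report):

kubectl -n linkerd-bug get svc zipkin-collector -o wide   # note the ClusterIP
# delete and re-create the Service (see the reproduction steps below), then:
kubectl -n linkerd-bug get svc zipkin-collector -o wide   # a different ClusterIP is shown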

Environment

  • linkerd 1.2.1
  • Zipkin 1.20
  • Kubernetes 1.7.2 + RBAC
  • Networking: Canal
  • AWS

To reproduce the bug

  1. Create the Linkerd and Zipkin services using the configs below (see the command sketch after this list)
  2. Delete the Zipkin service
  3. Re-create the Zipkin service
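
A rough sketch of these steps with kubectl, assuming the manifests below are saved as linkerd.yaml and zipkin.yaml and that it is the collector Service being cycled (the filenames are illustrative):

kubectl apply -f linkerd.yaml
kubectl apply -f zipkin.yaml
# send some traffic through Linkerd so traces are pushed to Zipkin, then:
kubectl -n linkerd-bug delete svc zipkin-collector
kubectl apply -f zipkin.yaml   # re-creates the collector Service
# after this, Linkerd no longer pushes traces to the new collector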

Linkerd config:

 # RBAC configs for linkerd
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: linkerd-endpoints-reader
  namespace: linkerd-bug
rules:
  # "" indicates the core API group
  - apiGroups: [""]
    resources: ["endpoints", "services", "pods"] 
    verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: linkerd-role-binding
  namespace: linkerd-bug
subjects:
  - kind: ServiceAccount
    name: default
    namespace: linkerd-bug
roleRef:
  kind: ClusterRole
  name: linkerd-endpoints-reader
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: linkerd-config
  namespace: linkerd-bug
data:
  config.yaml: |-
    admin:
      port: 9990

    telemetry:
    - kind: io.l5d.prometheus 
    - kind: io.l5d.zipkin 
      host: zipkin-collector.linkerd-bug.svc.cluster.local
      port: 9410
      sampleRate: 1.0 

    namers:
    - kind: io.l5d.k8s
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.http
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: linkerd
        port: http-incoming
        service: l5d
        hostNetwork: true
    - kind: io.l5d.rewrite
      prefix: /portNsSvcToK8s
      pattern: "/{port}/{ns}/{svc}"
      name: "/k8s/{ns}/{port}/{svc}"

    routers:
    - label: http-outgoing
      protocol: http
      servers:
      - port: 4140
        ip: 0.0.0.0
      dtab: |
        /ph  => /$/io.buoyant.rinet;
        /svc => /ph/80;
        /svc => /$/io.buoyant.porthostPfx/ph;
        /k8s => /#/io.l5d.k8s.http; 
        /portNsSvc => /#/portNsSvcToK8s;
        /host => /portNsSvc/http/default;
        /host => /portNsSvc/http;
        /svc => /$/io.buoyant.http.domainToPathPfx/host;
      client:
        kind: io.l5d.static
        configs:
        - prefix: "/$/io.buoyant.rinet/443/{service}"
          tls:
            commonName: "{service}"

    - label: http-incoming
      protocol: http
      servers:
      - port: 4141
        ip: 0.0.0.0
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true
      dtab: |
        /k8s => /#/io.l5d.k8s;
        /portNsSvc => /#/portNsSvcToK8s;
        /host => /portNsSvc/http/default;
        /host => /portNsSvc/http;
        /svc => /$/io.buoyant.http.domainToPathPfx/host;

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: linkerd
  name: linkerd
  namespace: linkerd-bug
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: linkerd
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true 
      volumes:
      - name: linkerd-config
        configMap:
          name: "linkerd-config"
      containers:
      - name: linkerd
        image: buoyantio/linkerd:1.2.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http-outgoing
          containerPort: 4140
          hostPort: 4140
        - name: http-incoming
          containerPort: 4141
        volumeMounts:
        - name: "linkerd-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      # Run `kubectl proxy` as a sidecar to give us authenticated access to the
      # Kubernetes API.
      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: linkerd
  namespace: linkerd-bug
spec:
  selector:
    app: linkerd
  type: LoadBalancer
  ports:
  - name: http-outgoing
    port: 4140
  - name: http-incoming
    port: 4141

Zipkin config:

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: zipkin
  namespace: linkerd-bug
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zipkin
  template:
    metadata:
      name: zipkin
      labels:
        app: zipkin
    spec:
      containers:
      - name: zipkin
        image: openzipkin/zipkin:1.20
        env:
        - name: SCRIBE_ENABLED
          value: "true"
        ports:
        - name: scribe
          containerPort: 9410
        - name: http
          containerPort: 9411

---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: zipkin-collector
  name: zipkin-collector
  namespace: linkerd-bug
spec:
  type: ClusterIP
  selector:
    app: zipkin
  ports:
  - name: scribe
    port: 9410
    targetPort: 9410

---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: zipkin
  name: zipkin
  namespace: linkerd-bug
spec:
  type: LoadBalancer
  selector:
    app: zipkin
  ports:
    - name: http
      port: 80
      targetPort: 9411
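
One way to check what the collector hostname resolves to from inside the cluster after the re-create (illustrative; the busybox pod and nslookup are assumptions, not part of the report):

kubectl -n linkerd-bug run dns-check --rm -it --image=busybox --restart=Never -- \
  nslookup zipkin-collector.linkerd-bug.svc.cluster.local
# cluster DNS returns the Service's current ClusterIP; if Linkerd still targets the old
# address after the re-create, that points at a cached / stale resolution in the process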