
kubectl --server-side apply replaces the live manifest instead of merging when migrating from client-side apply to server-side apply #125403

Closed
Mangaal opened this issue Jun 10, 2024 · 13 comments
Assignees
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
sig/api-machinery: Categorizes an issue or PR as relevant to SIG API Machinery.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@Mangaal

Mangaal commented Jun 10, 2024

What happened?

If a resource is initially created with kubectl client-side apply and is later updated with server-side apply, the live manifest is replaced by the applied manifest: fields missing from the applied manifest are removed instead of being merged. This happens with kubectl v1.29.2; it worked (merged) with an older version of kubectl (v1.22.0).

What did you expect to happen?

I expected the live manifest and the applied manifest to be merged instead of the live manifest being replaced.

How can we reproduce it (as minimally and precisely as possible)?

  1. kubectl create namespace kiali-test to create a testing namespace.
  2. kubectl apply -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/1-manual/1-kiali.io_kialis.yaml to install the CRD.
  3. kubectl apply -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/1-manual/2-kiali.io_v1alpha1_kiali.yaml to install a specific Kiali instance manifest.
  4. kubectl apply --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml
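
To observe the ownership transition between steps 3 and 4, one illustrative check (not part of the original report; the resource name kiali is taken from the manifests above) is to inspect metadata.managedFields after each step:

# After step 3, the fields are owned by the client-side apply manager
# (typically "kubectl-client-side-apply" with operation Update).
$ kubectl get kialis kiali -n kiali-test -o yaml --show-managed-fields
# After step 4, the server-side apply manager takes over those fields
# (operation Apply), and fields missing from the applied manifest are dropped.
$ kubectl get kialis kiali -n kiali-test -o yaml --show-managed-fields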

Anything else we need to know?

No response

Kubernetes version

$ kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2

Cloud provider

kind cluster on local machines

OS version

# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@Mangaal Mangaal added the kind/bug Categorizes issue or PR as related to a bug. label Jun 10, 2024
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 10, 2024
@tamilselvan1102

/sig api-machinery

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jun 10, 2024
@fedebongio
Contributor

/assign @alexzielenski @apelisse
/cc @jpbetz
/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 11, 2024
@Mangaal
Author

Mangaal commented Jun 18, 2024

As outlined in the documentation (merge strategy and upgrading from client-side apply to server-side apply), and according to my understanding, fields previously managed by client-side apply are expected to migrate to server-side apply. In cases where no other controller or manager manages these fields, they may be replaced by the new configuration applied via server-side apply.

If my understanding is correct, this behavior isn't a bug but rather an intended consequence of the transition process.
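
As a minimal sketch of this transition with a plain ConfigMap (the ssa-demo namespace and demo name are made up for illustration, assuming the migration behavior described above):

$ kubectl create namespace ssa-demo
# Client-side apply creates the object with two keys.
$ kubectl apply -n ssa-demo -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  a: "1"
  b: "2"
EOF
# Server-side apply with a manifest that omits key "b".
$ kubectl apply -n ssa-demo --server-side=true -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  a: "1"
EOF
# With the kubectl versions discussed here, ownership of the client-side-applied
# fields moves to the SSA manager, so "b" is removed; older kubectl left it in place.
$ kubectl get configmap demo -n ssa-demo -o jsonpath='{.data}'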

However, this migration behavior seems to be causing issues in ArgoCD after upgrading the Kubernetes and kubectl dependencies to v0.29.6, as discussed in ArgoCD issue #17882.
/help

@k8s-ci-robot
Contributor

@Mangaal:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jun 18, 2024
@alexzielenski
Contributor

This is intended behavior of kubectl. The previous behavior was a bug that was fixed relatively recently: #112905.

Mixing client-side apply and server-side apply is not supported. If you would like to manually manage a portion of your resource while ArgoCD manages another portion, please use SSA in your initial commands by adding --server-side=true and --field-manager=<DifferentFieldManager> for your manual application. If both manifests are applied using SSA with different field managers, they will complement each other as desired.
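
For example, a sketch of that setup (the manifest file names below are placeholders, not taken from this issue):

# Manually managed portion, applied server-side with its own field manager.
$ kubectl apply --server-side=true --field-manager=manual -f manual-portion.yaml
# ArgoCD-managed portion, applied server-side with a different field manager.
$ kubectl apply --server-side=true --field-manager=argocd -f argocd-portion.yaml
# As long as the two manifests do not set the same field to different values,
# each manager owns its own fields and the applies merge instead of conflicting.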

@Mangaal
Author

Mangaal commented Jun 18, 2024

Thanks, @alexzielenski, for the confirmation.

@Mangaal Mangaal closed this as completed Jun 18, 2024
@Mangaal
Author

Mangaal commented Jun 18, 2024

Closing this issue

@SebastianJ91

SebastianJ91 commented Jun 19, 2024

It seems like --dry-run=server is not working correctly for this case.

A dry run with kubectl apply -o yaml --dry-run=server --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml previews a merged manifest that still contains the client-side created fields.

Applying the manifest with kubectl apply -o yaml --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml deletes the previously client-side created fields.

So you can never be sure how the resulting manifest will look.
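
One illustrative way to compare what the server predicts against the live object (file names below are placeholders, not from this issue):

$ kubectl apply -o yaml --dry-run=server --server-side=true \
    -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml > preview.yaml
$ kubectl get kialis kiali -n kiali-test -o yaml > live.yaml
# Compare the server's predicted result with the current live manifest.
$ diff live.yaml preview.yaml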

This problem is reflected in the ArgoCD diff view.

Should this be tracked here, with this ticket reopened, or in a separate new issue?

@alexzielenski
Contributor

alexzielenski commented Jun 19, 2024

You must supply a different field manager; note the --field-manager argument:

$ kubectl create namespace kiali-test
namespace/kiali-test created

$ kubectl apply -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/1-manual/1-kiali.io_kialis.yaml
customresourcedefinition.apiextensions.k8s.io/kialis.kiali.io created

$ kubectl apply -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/1-manual/2-kiali.io_v1alpha1_kiali.yaml --server-side=true --field-manager=manual
kiali.kiali.io/kiali serverside-applied

$ kubectl apply --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml --server-side=true --field-manager=argocd
error: Apply failed with 1 conflict: conflict with "manual": .spec.istio_labels.app_label_name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts

$ kubectl apply --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml --server-side=true --field-manager=argocd --force-conflicts
kiali.kiali.io/kiali serverside-applied

$ kubectl get kialis -n kiali-test -o yaml
apiVersion: v1
items:
- apiVersion: kiali.io/v1alpha1
  kind: Kiali
  metadata:
    annotations:
      ansible.sdk.operatorframework.io/verbosity: "1"
      argocd.argoproj.io/sync-options: ServerSideApply=true,Validate=false,SkipDryRunOnMissingRessource=true
    creationTimestamp: "2024-06-19T14:33:06Z"
    generation: 2
    name: kiali
    namespace: kiali-test
    resourceVersion: "52382605"
    uid: 863566c8-3484-41d5-873c-5126678d9789
  spec:
    additional_display_details:
    - annotation: kiali.io/api-spec
      icon_annotation: kiali.io/api-type
      title: API Documentation
    api:
      namespaces:
        exclude:
        - ^istio-operator
        - ^kube-.*
        - ^openshift.*
        - ^ibm.*
        - ^kiali-operator
        include: []
        label_selector_exclude: ""
        label_selector_include: kiali.io/member-of=istio-system
    auth:
      openid:
        additional_request_params:
          openIdReqParam: openIdReqParamValue
        allowed_domains:
        - allowed.domain
        api_proxy: ""
        api_proxy_ca_data: ""
        api_token: id_token
        authentication_timeout: 300
        authorization_endpoint: ""
        client_id: ""
        disable_rbac: false
        http_proxy: ""
        https_proxy: ""
        insecure_skip_verify_tls: false
        issuer_uri: ""
        scopes:
        - openid
        - profile
        - email
        username_claim: sub
      openshift:
        auth_timeout: 10
        client_id_prefix: kiali
      strategy: ""
    clustering:
      autodetect_secrets:
        enabled: true
        label: kiali.io/multiCluster=true
      clusters: []
      kiali_urls: []
    custom_dashboards:
    - name: envoy
    deployment:
      accessible_namespaces:
      - my-mesh.*
      additional_service_yaml:
        externalName: kiali.example.com
      affinity:
        node:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/e2e-az-name
                operator: In
                values:
                - e2e-az1
                - e2e-az2
        pod:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - S1
            topologyKey: topology.kubernetes.io/zone
        pod_anti:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: security
                  operator: In
                  values:
                  - S2
              topologyKey: topology.kubernetes.io/zone
            weight: 100
      cluster_wide_access: false
      configmap_annotations:
        strategy.spinnaker.io/versioned: "false"
      custom_secrets:
      - mount: /a-custom-secret-path
        name: a-custom-secret
        optional: true
      - csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: kiali-secretprovider
        mount: /a-csi-secret-path
        name: a-csi-secret
      host_aliases:
      - hostnames:
        - foo.local
        - bar.local
        ip: 192.168.1.100
      hpa:
        api_version: ""
        spec:
          maxReplicas: 2
          metrics:
          - resource:
              name: cpu
              target:
                averageUtilization: 50
                type: Utilization
            type: Resource
          minReplicas: 1
      image_digest: ""
      image_name: ""
      image_pull_policy: IfNotPresent
      image_pull_secrets:
      - image.pull.secret
      image_version: ""
      ingress:
        additional_labels:
          ingressAdditionalLabel: ingressAdditionalLabelValue
        class_name: nginx
        enabled: false
        override_yaml:
          metadata:
            annotations:
              nginx.ingress.kubernetes.io/backend-protocol: HTTPS
              nginx.ingress.kubernetes.io/secure-backends: "true"
          spec:
            rules:
            - http:
                paths:
                - backend:
                    service:
                      name: kiali
                      port:
                        number: 20001
                  path: /kiali
                  pathType: Prefix
      instance_name: kiali
      logger:
        log_format: text
        log_level: info
        sampler_rate: "1"
        time_field_format: 2006-01-02T15:04:05Z07:00
      namespace: istio-system
      node_selector:
        nodeSelector: nodeSelectorValue
      pod_annotations:
        podAnnotation: podAnnotationValue
      pod_labels:
        sidecar.istio.io/inject: "true"
      priority_class_name: ""
      replicas: 1
      resources:
        limits:
          memory: 1Gi
        requests:
          cpu: 10m
          memory: 64Mi
      secret_name: kiali
      security_context: {}
      service_annotations:
        svcAnnotation: svcAnnotationValue
      service_type: NodePort
      tolerations:
      - effect: NoSchedule
        key: example-key
        operator: Exists
      version_label: ""
      view_only_mode: false
    external_services:
      custom_dashboards:
        discovery_auto_threshold: 10
        discovery_enabled: auto
        enabled: true
        is_core: false
        namespace_label: namespace
        prometheus:
          auth:
            ca_file: ""
            insecure_skip_verify: false
            password: ""
            token: ""
            type: none
            use_kiali_token: false
            username: ""
          cache_duration: 10
          cache_enabled: true
          cache_expiration: 300
          custom_headers:
            customHeader1: customHeader1Value
          health_check_url: ""
          is_core: true
          query_scope:
            cluster: cluster-east
            mesh_id: mesh-1
          thanos_proxy:
            enabled: false
            retention_period: 7d
            scrape_interval: 30s
          url: ""
      grafana:
        auth:
          ca_file: ""
          insecure_skip_verify: false
          password: ""
          token: ""
          type: none
          use_kiali_token: false
          username: ""
        dashboards:
        - name: Istio Service Dashboard
          variables:
            namespace: var-namespace
            service: var-service
        - name: Istio Workload Dashboard
          variables:
            namespace: var-namespace
            workload: var-workload
        - name: Istio Mesh Dashboard
        - name: Istio Control Plane Dashboard
        - name: Istio Performance Dashboard
        - name: Istio Wasm Extension Dashboard
        enabled: true
        health_check_url: ""
        in_cluster_url: ""
        is_core: false
        url: ""
      istio:
        component_status:
          components:
          - app_label: istiod
            is_core: true
            is_proxy: false
          - app_label: istio-ingressgateway
            is_core: true
            is_proxy: true
            namespace: istio-system
          - app_label: istio-egressgateway
            is_core: false
            is_proxy: true
            namespace: istio-system
          enabled: true
        config_map_name: istio
        envoy_admin_local_port: 15000
        gateway_api_classes: []
        istio_api_enabled: true
        istio_canary_revision:
          current: 1-9-9
          upgrade: 1-10-2
        istio_identity_domain: svc.cluster.local
        istio_injection_annotation: sidecar.istio.io/inject
        istio_sidecar_annotation: sidecar.istio.io/status
        istio_sidecar_injector_config_map_name: istio-sidecar-injector
        istiod_deployment_name: istiod
        istiod_pod_monitoring_port: 15014
        root_namespace: ""
        url_service_version: ""
      prometheus:
        auth:
          ca_file: ""
          insecure_skip_verify: false
          password: ""
          token: ""
          type: none
          use_kiali_token: false
          username: ""
        cache_duration: 10
        cache_enabled: true
        cache_expiration: 300
        custom_headers:
          customHeader1: customHeader1Value
        health_check_url: ""
        is_core: true
        query_scope:
          cluster: cluster-east
          mesh_id: mesh-1
        thanos_proxy:
          enabled: false
          retention_period: 7d
          scrape_interval: 30s
        url: ""
      tracing:
        auth:
          ca_file: ""
          insecure_skip_verify: false
          password: ""
          token: ""
          type: none
          use_kiali_token: false
          username: ""
        enabled: true
        grpc_port: 9095
        health_check_url: ""
        in_cluster_url: ""
        is_core: false
        namespace_selector: true
        provider: jaeger
        query_scope:
          cluster: cluster-east
          mesh_id: mesh-1
        query_timeout: 5
        tempo_config:
          datasource_uid: ""
          org_id: ""
        url: ""
        use_grpc: true
        whitelist_istio_system:
        - jaeger-query
        - istio-ingressgateway
    health_config:
      rate:
      - kind: .*
        name: .*
        namespace: .*
        tolerance:
        - code: '[1234]00'
          degraded: 5
          direction: .*
          failure: 10
          protocol: http
    identity:
      cert_file: ""
      private_key_file: ""
    installation_tag: ""
    istio_labels:
      app_label_name: service.istio-io/canonical-name
      injection_label_name: istio-injection
      injection_label_rev: istio.io/rev
      version_label_name: version
    istio_namespace: ""
    kiali_feature_flags:
      certificates_information_indicators:
        enabled: true
        secrets:
        - cacerts
        - istio-ca-secret
      clustering:
        autodetect_secrets:
          enabled: true
          label: kiali.io/multiCluster=true
        clusters: []
        kiali_urls: []
      disabled_features: []
      istio_annotation_action: true
      istio_injection_action: true
      istio_upgrade_action: false
      ui_defaults:
        graph:
          find_options:
          - description: 'Find: slow edges (> 1s)'
            expression: rt > 1000
          - description: 'Find: unhealthy nodes'
            expression: '! healthy'
          - description: 'Find: unknown nodes'
            expression: name = unknown
          hide_options:
          - description: 'Hide: healthy nodes'
            expression: healthy
          - description: 'Hide: unknown nodes'
            expression: name = unknown
          traffic:
            grpc: requests
            http: requests
            tcp: sent
        i18n:
          language: en
          show_selector: false
        list:
          include_health: true
          include_istio_resources: true
          include_validations: true
          show_include_toggles: false
        metrics_inbound:
          aggregations:
          - display_name: Istio Network
            label: topology_istio_io_network
          - display_name: Istio Revision
            label: istio_io_rev
        metrics_outbound:
          aggregations:
          - display_name: Istio Network
            label: topology_istio_io_network
          - display_name: Istio Revision
            label: istio_io_rev
        metrics_per_refresh: 1m
        namespaces:
        - istio-system
        refresh_interval: 1m
      validations:
        ignore:
        - KIA1301
        skip_wildcard_gateway_hosts: false
    kubernetes_config:
      burst: 200
      cache_duration: 300
      cache_token_namespace_duration: 10
      excluded_workloads:
      - CronJob
      - DeploymentConfig
      - Job
      - ReplicationController
      qps: 175
    login_token:
      expiration_seconds: 86400
      signing_key: ""
    server:
      address: ""
      audit_log: true
      cors_allow_all: false
      gzip_enabled: true
      node_port: 32475
      observability:
        metrics:
          enabled: true
          port: 9090
        tracing:
          collector_type: jaeger
          collector_url: http:https://jaeger-collector.istio-system:14268/api/traces
          enabled: false
          otel:
            ca_name: ""
            protocol: http
            skip_verify: false
            tls_enabled: false
      port: 20001
      profiler:
        enabled: false
      web_fqdn: ""
      web_history_mode: ""
      web_port: ""
      web_root: ""
      web_schema: ""
    version: default
kind: List
metadata:
  resourceVersion: ""

Note that I got an error when applying the first time because I had specified the same field in both patches without --force-conflicts. I'm not sure whether ArgoCD supplies this argument. It is required here because we have two field managers owning fields: a "manual" user and "argocd". Two field managers cannot disagree on the value of a field. To avoid a conflict, one field manager must not include the field in its patch, or the second one to apply must use --force-conflicts.
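
As an illustrative check (not prescribed above), you can see which manager currently owns the conflicting field before choosing between editing the manifest and using --force-conflicts:

# metadata.managedFields lists each manager ("manual", "argocd", ...) together with
# the fields it owns, e.g. f:spec -> f:istio_labels -> f:app_label_name.
$ kubectl get kialis kiali -n kiali-test -o yaml --show-managed-fields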

@alexzielenski
Contributor

alexzielenski commented Jun 19, 2024

A dry-run with kubectl apply -o yaml --dry-run=server --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml is previewing a merged manifest still containing the client side created fields.

Applying the manifest with kubectl apply -o yaml --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml deletes previous client-side created field

It does seem like a bug for a dry run and a real apply to return different results. That would require a new bug report.

@Mangaal
Author

Mangaal commented Jun 19, 2024

Hi @alexzielenski,

ArgoCD applies both --force-conflicts and --field-manager argument.

In your case, in the third step you reapplied the same resource that was initially created with CSA using SSA without making any changes, so it was updated to SSA.

Even in the case of ArgoCD, if we do a server-side apply without changing the YAML files before we let ArgoCD manage the resource, it works.

@alexzielenski
Contributor

In your case, in the third step you reapplied the same resource that was initially created with CSA using SSA without making any changes, so it was updated to SSA.

I'm not sure what you mean here. I used SSA for both applications, and used a different YAML for each:

@Mangaal
Author

Mangaal commented Jun 19, 2024

Correction:
In the 3rd step, the resource was created with SSA instead of CSA.
I was trying to explain what I have observed.
One thing I have noticed is that if we reapply a resource initially created with CSA using SSA, without making any changes, before we let ArgoCD manage the resource, it still works.
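
A sketch of that workaround, reusing the kiali manifest URL from the reproduction steps above:

# 1. Re-apply the unchanged manifest once with SSA so that field ownership moves
#    from client-side apply to the server-side apply manager without dropping fields.
$ kubectl apply --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/1-manual/2-kiali.io_v1alpha1_kiali.yaml
# 2. Only after that, let ArgoCD (or a later SSA with a different manifest) take over.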
