kubectl --server-side apply replaces the live manifest instead of merging when migrating from client-side apply to server-side apply #125403
Comments
/sig api-machinery
/assign @alexzielenski @apelisse
As outlined in the documentation (merge strategy and upgrading from client-side apply to server-side apply), fields previously managed by client-side apply are expected to migrate to server-side apply. In cases where no other controller or manager manages those fields, they may be removed or replaced by the new configuration applied via server-side apply. If my understanding is correct, this behavior isn't a bug but rather an intended consequence of the transition process. However, this migration behavior seems to be causing issues in ArgoCD after upgrading the Kubernetes and kubectl dependencies to v0.29.6, as discussed in ArgoCD issue #17882.
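For anyone reproducing this, field ownership can be inspected directly. A minimal sketch, assuming a hypothetical manifest app.yaml containing a Deployment named my-app (kubectl hides managedFields unless asked):

❯ kubectl apply -f app.yaml
# Client-side apply records ownership under the "kubectl-client-side-apply" manager.
❯ kubectl get deployment my-app -o yaml --show-managed-fields
❯ kubectl apply --server-side -f app.yaml
# Ownership migrates to the "kubectl" manager (operation: Apply); fields that are
# absent from app.yaml and owned by no other manager are removed.
❯ kubectl get deployment my-app -o yaml --show-managed-fields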
@Mangaal: Guidelines: Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This is intended behavior of kubectl. It was a bug that was fixed relatively recently: #112905. Mixing client-side apply and server-side apply is not supported. If you would like to manually manage a portion of your resource while ArgoCD manages another portion, then please use SSA in your initial commands by adding `--server-side`.
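For illustration, a minimal sketch of that workflow with hypothetical manifests manual.yaml and argocd.yaml (the same flags are demonstrated later in this thread):

❯ kubectl apply --server-side --field-manager=manual -f manual.yaml
❯ kubectl apply --server-side --field-manager=argocd -f argocd.yaml

With distinct field managers, each apply owns only the fields present in its own manifest, and overlapping fields surface as explicit conflicts instead of being silently replaced.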
Thanks, @alexzielenski, for the confirmation.
Closing this issue |
It seems like a dry-run and a real apply produce different results: a dry-run with `--server-side` returns one manifest, while actually applying the manifest with the same flags yields another. So you can never be sure how the resulting manifest will look. This problem is reflected in the ArgoCD diff view. Should this be tracked here (and the ticket reopened), or in another new issue?
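One way to observe the divergence is to diff the server's dry-run result against the live object after a real apply. A minimal sketch, assuming the kiali.yaml manifest and field manager used elsewhere in this thread:

❯ kubectl apply --server-side --field-manager=argocd --dry-run=server -f kiali.yaml -o yaml > predicted.yaml
❯ kubectl apply --server-side --field-manager=argocd -f kiali.yaml
❯ kubectl get kialis kiali -n kiali-test -o yaml > actual.yaml
❯ diff predicted.yaml actual.yaml
# Apart from resourceVersion/generation noise, the two should match; the report
# here is that they do not.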
You must supply a different field manager; note the `--field-manager` flag in the commands below:

❯ kubectl create namespace kiali-test
namespace/kiali-test created
❯ kubectl apply -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/1-manual/1-kiali.io_kialis.yaml
customresourcedefinition.apiextensions.k8s.io/kialis.kiali.io created
❯ kubectl apply -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/1-manual/2-kiali.io_v1alpha1_kiali.yaml --server-side=true --field-manager=manual
kiali.kiali.io/kiali serverside-applied
❯ kubectl apply --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml --server-side=true --field-manager=argocd
error: Apply failed with 1 conflict: conflict with "manual": .spec.istio_labels.app_label_name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
❯ kubectl apply --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml --server-side=true --field-manager=argocd --force-conflicts
kiali.kiali.io/kiali serverside-applied
❯ kubectl get kialis -n kiali-test -o yaml
apiVersion: v1
items:
- apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
annotations:
ansible.sdk.operatorframework.io/verbosity: "1"
argocd.argoproj.io/sync-options: ServerSideApply=true,Validate=false,SkipDryRunOnMissingRessource=true
creationTimestamp: "2024-06-19T14:33:06Z"
generation: 2
name: kiali
namespace: kiali-test
resourceVersion: "52382605"
uid: 863566c8-3484-41d5-873c-5126678d9789
spec:
additional_display_details:
- annotation: kiali.io/api-spec
icon_annotation: kiali.io/api-type
title: API Documentation
api:
namespaces:
exclude:
- ^istio-operator
- ^kube-.*
- ^openshift.*
- ^ibm.*
- ^kiali-operator
include: []
label_selector_exclude: ""
label_selector_include: kiali.io/member-of=istio-system
auth:
openid:
additional_request_params:
openIdReqParam: openIdReqParamValue
allowed_domains:
- allowed.domain
api_proxy: ""
api_proxy_ca_data: ""
api_token: id_token
authentication_timeout: 300
authorization_endpoint: ""
client_id: ""
disable_rbac: false
http_proxy: ""
https_proxy: ""
insecure_skip_verify_tls: false
issuer_uri: ""
scopes:
- openid
- profile
- email
username_claim: sub
openshift:
auth_timeout: 10
client_id_prefix: kiali
strategy: ""
clustering:
autodetect_secrets:
enabled: true
label: kiali.io/multiCluster=true
clusters: []
kiali_urls: []
custom_dashboards:
- name: envoy
deployment:
accessible_namespaces:
- my-mesh.*
additional_service_yaml:
externalName: kiali.example.com
affinity:
node:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/e2e-az-name
operator: In
values:
- e2e-az1
- e2e-az2
pod:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S1
topologyKey: topology.kubernetes.io/zone
pod_anti:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S2
topologyKey: topology.kubernetes.io/zone
weight: 100
cluster_wide_access: false
configmap_annotations:
strategy.spinnaker.io/versioned: "false"
custom_secrets:
- mount: /a-custom-secret-path
name: a-custom-secret
optional: true
- csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: kiali-secretprovider
mount: /a-csi-secret-path
name: a-csi-secret
host_aliases:
- hostnames:
- foo.local
- bar.local
ip: 192.168.1.100
hpa:
api_version: ""
spec:
maxReplicas: 2
metrics:
- resource:
name: cpu
target:
averageUtilization: 50
type: Utilization
type: Resource
minReplicas: 1
image_digest: ""
image_name: ""
image_pull_policy: IfNotPresent
image_pull_secrets:
- image.pull.secret
image_version: ""
ingress:
additional_labels:
ingressAdditionalLabel: ingressAdditionalLabelValue
class_name: nginx
enabled: false
override_yaml:
metadata:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
rules:
- http:
paths:
- backend:
service:
name: kiali
port:
number: 20001
path: /kiali
pathType: Prefix
instance_name: kiali
logger:
log_format: text
log_level: info
sampler_rate: "1"
time_field_format: 2006-01-02T15:04:05Z07:00
namespace: istio-system
node_selector:
nodeSelector: nodeSelectorValue
pod_annotations:
podAnnotation: podAnnotationValue
pod_labels:
sidecar.istio.io/inject: "true"
priority_class_name: ""
replicas: 1
resources:
limits:
memory: 1Gi
requests:
cpu: 10m
memory: 64Mi
secret_name: kiali
security_context: {}
service_annotations:
svcAnnotation: svcAnnotationValue
service_type: NodePort
tolerations:
- effect: NoSchedule
key: example-key
operator: Exists
version_label: ""
view_only_mode: false
external_services:
custom_dashboards:
discovery_auto_threshold: 10
discovery_enabled: auto
enabled: true
is_core: false
namespace_label: namespace
prometheus:
auth:
ca_file: ""
insecure_skip_verify: false
password: ""
token: ""
type: none
use_kiali_token: false
username: ""
cache_duration: 10
cache_enabled: true
cache_expiration: 300
custom_headers:
customHeader1: customHeader1Value
health_check_url: ""
is_core: true
query_scope:
cluster: cluster-east
mesh_id: mesh-1
thanos_proxy:
enabled: false
retention_period: 7d
scrape_interval: 30s
url: ""
grafana:
auth:
ca_file: ""
insecure_skip_verify: false
password: ""
token: ""
type: none
use_kiali_token: false
username: ""
dashboards:
- name: Istio Service Dashboard
variables:
namespace: var-namespace
service: var-service
- name: Istio Workload Dashboard
variables:
namespace: var-namespace
workload: var-workload
- name: Istio Mesh Dashboard
- name: Istio Control Plane Dashboard
- name: Istio Performance Dashboard
- name: Istio Wasm Extension Dashboard
enabled: true
health_check_url: ""
in_cluster_url: ""
is_core: false
url: ""
istio:
component_status:
components:
- app_label: istiod
is_core: true
is_proxy: false
- app_label: istio-ingressgateway
is_core: true
is_proxy: true
namespace: istio-system
- app_label: istio-egressgateway
is_core: false
is_proxy: true
namespace: istio-system
enabled: true
config_map_name: istio
envoy_admin_local_port: 15000
gateway_api_classes: []
istio_api_enabled: true
istio_canary_revision:
current: 1-9-9
upgrade: 1-10-2
istio_identity_domain: svc.cluster.local
istio_injection_annotation: sidecar.istio.io/inject
istio_sidecar_annotation: sidecar.istio.io/status
istio_sidecar_injector_config_map_name: istio-sidecar-injector
istiod_deployment_name: istiod
istiod_pod_monitoring_port: 15014
root_namespace: ""
url_service_version: ""
prometheus:
auth:
ca_file: ""
insecure_skip_verify: false
password: ""
token: ""
type: none
use_kiali_token: false
username: ""
cache_duration: 10
cache_enabled: true
cache_expiration: 300
custom_headers:
customHeader1: customHeader1Value
health_check_url: ""
is_core: true
query_scope:
cluster: cluster-east
mesh_id: mesh-1
thanos_proxy:
enabled: false
retention_period: 7d
scrape_interval: 30s
url: ""
tracing:
auth:
ca_file: ""
insecure_skip_verify: false
password: ""
token: ""
type: none
use_kiali_token: false
username: ""
enabled: true
grpc_port: 9095
health_check_url: ""
in_cluster_url: ""
is_core: false
namespace_selector: true
provider: jaeger
query_scope:
cluster: cluster-east
mesh_id: mesh-1
query_timeout: 5
tempo_config:
datasource_uid: ""
org_id: ""
url: ""
use_grpc: true
whitelist_istio_system:
- jaeger-query
- istio-ingressgateway
health_config:
rate:
- kind: .*
name: .*
namespace: .*
tolerance:
- code: '[1234]00'
degraded: 5
direction: .*
failure: 10
protocol: http
identity:
cert_file: ""
private_key_file: ""
installation_tag: ""
istio_labels:
app_label_name: service.istio-io/canonical-name
injection_label_name: istio-injection
injection_label_rev: istio.io/rev
version_label_name: version
istio_namespace: ""
kiali_feature_flags:
certificates_information_indicators:
enabled: true
secrets:
- cacerts
- istio-ca-secret
clustering:
autodetect_secrets:
enabled: true
label: kiali.io/multiCluster=true
clusters: []
kiali_urls: []
disabled_features: []
istio_annotation_action: true
istio_injection_action: true
istio_upgrade_action: false
ui_defaults:
graph:
find_options:
- description: 'Find: slow edges (> 1s)'
expression: rt > 1000
- description: 'Find: unhealthy nodes'
expression: '! healthy'
- description: 'Find: unknown nodes'
expression: name = unknown
hide_options:
- description: 'Hide: healthy nodes'
expression: healthy
- description: 'Hide: unknown nodes'
expression: name = unknown
traffic:
grpc: requests
http: requests
tcp: sent
i18n:
language: en
show_selector: false
list:
include_health: true
include_istio_resources: true
include_validations: true
show_include_toggles: false
metrics_inbound:
aggregations:
- display_name: Istio Network
label: topology_istio_io_network
- display_name: Istio Revision
label: istio_io_rev
metrics_outbound:
aggregations:
- display_name: Istio Network
label: topology_istio_io_network
- display_name: Istio Revision
label: istio_io_rev
metrics_per_refresh: 1m
namespaces:
- istio-system
refresh_interval: 1m
validations:
ignore:
- KIA1301
skip_wildcard_gateway_hosts: false
kubernetes_config:
burst: 200
cache_duration: 300
cache_token_namespace_duration: 10
excluded_workloads:
- CronJob
- DeploymentConfig
- Job
- ReplicationController
qps: 175
login_token:
expiration_seconds: 86400
signing_key: ""
server:
address: ""
audit_log: true
cors_allow_all: false
gzip_enabled: true
node_port: 32475
observability:
metrics:
enabled: true
port: 9090
tracing:
collector_type: jaeger
collector_url: http:https://jaeger-collector.istio-system:14268/api/traces
enabled: false
otel:
ca_name: ""
protocol: http
skip_verify: false
tls_enabled: false
port: 20001
profiler:
enabled: false
web_fqdn: ""
web_history_mode: ""
web_port: ""
web_root: ""
web_schema: ""
version: default
kind: List
metadata:
resourceVersion: "" Note that I got an error when applying the first time because I had specified the same field in both patches without |
It does seem like a bug for a dry run and a regular apply to return different results. That would require a new bug report.
Hi @alexzielenski, ArgoCD passes both the --force-conflicts and --field-manager arguments. In your case, in the third step you reapplied the same resource initially created with CSA using SSA without making any changes, so it was migrated to SSA. Even in the case of ArgoCD, if we do a server-side apply without changing the YAML files before we let ArgoCD manage the resource, it works.
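For context, the kubectl equivalent of what ArgoCD runs looks roughly like this; a sketch, assuming ArgoCD's default field manager name (which may vary by version):

❯ kubectl apply --server-side --field-manager=argocd-controller --force-conflicts -f kiali.yaml
# --force-conflicts makes this manager take ownership of any conflicting fields,
# which is why conflicts never surface as errors in ArgoCD.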
I'm not sure what you mean here. I used SSA for both applications, and used a different YAML for each:
Correction, |
What happened?
If a resource is created initially with kubectl client-side apply and is then updated with server-side apply, the live manifest gets replaced by the applied manifest and the missing fields are removed instead of being merged, with kubectl version v1.29.2. This worked with an older version of kubectl (v1.22.0).
What did you expect to happen?
I expected the live manifest and the applied manifest to be merged instead of replaced.
How can we reproduce it (as minimally and precisely as possible)?
1. Run kubectl create namespace kiali-test to create a testing namespace.
2. Run kubectl apply -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/1-manual/1-kiali.io_kialis.yaml to install the CRD.
3. Run kubectl apply -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/1-manual/2-kiali.io_v1alpha1_kiali.yaml to install a specific Kiali instance manifest.
4. Run kubectl apply --server-side=true -f https://raw.githubusercontent.com/SebastianJ91/argocd-server-side-apply/main/2-argocd-application/kiali.yaml and observe the live manifest (a verification sketch follows this list).
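A minimal sketch for verifying the result after step 4 (the resource name and namespace come from the manifests above):

❯ kubectl get kialis kiali -n kiali-test -o yaml --show-managed-fields
# Fields present in the original manifest but missing from kiali.yaml are gone,
# and managedFields shows ownership migrated from "kubectl-client-side-apply"
# to the "kubectl" manager with operation Apply.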
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)