
"conflicting fieldspecs" when trying to enable creation of antiAffinity commonLabel selectors #1013

Closed
bcbrockway opened this issue Apr 25, 2019 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@bcbrockway

Similar to #817 but in my case I want to enable the addition of commonLabels to specific selectors. I have a StatefulSet for Elasticsearch for which I currently have to define these manually:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
spec:
  serviceName: elasticsearch-data
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      tier: logging-plane
  template:
    metadata:
      labels:
        tier: logging-plane
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/app
                  operator: In
                  values:
                  - elasticsearch
                - key: role
                  operator: In
                  values:
                  - data
              topologyKey: kubernetes.io/hostname

The default transformer config disables auto-creation:

commonLabels:
- path: spec/template/spec/affinity/podAntiAffinity/preferredDuringSchedulingIgnoredDuringExecution/podAffinityTerm/labelSelector/matchLabels
  create: false
  group: apps
  kind: StatefulSet

I figured I could just create a new file with create set to true and then reference it in the kustomization.yaml file like so:

---
kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1

commonLabels:
  app.kubernetes.io/app: elasticsearch
  app: elasticsearch
  role: master

resources:
  - resources/elasticsearch-master-statefulSet.yml

configurations:
  - configurations/commonlabels.yaml
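
For reference, the configurations/commonlabels.yaml referenced above is not included in the original report; presumably it mirrors the default fieldspec with create flipped to true, along these lines (a sketch, not the actual file):

---
commonLabels:
# Assumed content: same path and kind as the built-in fieldspec shown above,
# but with create enabled so the matchLabels map is added when it is missing.
- path: spec/template/spec/affinity/podAntiAffinity/preferredDuringSchedulingIgnoredDuringExecution/podAffinityTerm/labelSelector/matchLabels
  create: true
  group: apps
  kind: StatefulSet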

But when I run kustomize build . I get the following error:

Error: AccumulateTarget: AccumulateTarget: conflicting fieldspecs
@bcbrockway bcbrockway changed the title conflicting fieldspecs "conflicting fieldspecs" when trying to enable creation of antiAffinity commonLabel selectors Apr 25, 2019
@jbrette
Contributor

jbrette commented Jun 20, 2019

Looks like the only thing you need to do is add an empty matchLabels: under the labelSelector. Because the default fieldspec has create: false, the commonLabels transformer only fills in a matchLabels map that already exists, so declaring an empty one gives it something to populate. There is no need to override the default kustomize config. I think you could close the bug.

Using this file as the resource:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
spec:
  serviceName: elasticsearch-data
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      tier: logging-plane
  template:
    metadata:
      labels:
        tier: logging-plane
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                matchExpressions:
                - key: app.kubernetes.io/app
                  operator: In
                  values:
                  - elasticsearch
                - key: role
                  operator: In
                  values:
                  - data
              topologyKey: kubernetes.io/hostname

produces:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: elasticsearch
    app.kubernetes.io/app: elasticsearch
    role: master
  name: elasticsearch-data
spec:
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
      app.kubernetes.io/app: elasticsearch
      role: master
      tier: logging-plane
  serviceName: elasticsearch-data
  template:
    metadata:
      labels:
        app: elasticsearch
        app.kubernetes.io/app: elasticsearch
        role: master
        tier: logging-plane
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/app
                  operator: In
                  values:
                  - elasticsearch
                - key: role
                  operator: In
                  values:
                  - data
                matchLabels:
                  app: elasticsearch
                  app.kubernetes.io/app: elasticsearch
                  role: master
              topologyKey: kubernetes.io/hostname
            weight: 100
  updateStrategy:
    type: RollingUpdate

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 18, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@aexvir

aexvir commented Oct 25, 2021

I'm hitting the same issue. Is the workaround of adding an empty matchLabels block how this is supposed to be handled, or is this a bug that needs to be looked into? I can reproduce the issue with the following configuration:

commonLabels:
  - path: spec/selector/matchLabels
    create: true
    group: policy
    kind: PodDisruptionBudget
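
For comparison, a minimal sketch of the earlier empty-matchLabels workaround applied here, using a hypothetical PodDisruptionBudget manifest (names are illustrative, not from this thread). Declaring the empty map lets a fieldspec with create: false merge the labels in, so no custom configuration (and therefore no fieldspec conflict) is needed:

---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: elasticsearch-pdb   # hypothetical name
spec:
  minAvailable: 2
  selector:
    # Empty matchLabels, mirroring the StatefulSet workaround above: the
    # commonLabels transformer fills in this existing (empty) map.
    matchLabels: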

@natasha41575 sorry for the random ping, but I've seen you active around this project. Should I open a new issue for this, or can this one be reopened?

@rlaakkol

rlaakkol commented Dec 5, 2022

@aexvir I think PodDisruptionBudgets are included in commonLabels propagation by default! I had the same issue and just removing the configuration fixed it for me.
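
In other words, a kustomization along these lines (filenames assumed for illustration) should label the PodDisruptionBudget selector with no configurations entry at all, since per the comment above the built-in fieldspecs already cover spec/selector/matchLabels for PodDisruptionBudget:

---
kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1

commonLabels:
  app: elasticsearch

resources:
  # Hypothetical manifest containing the PodDisruptionBudget; no
  # configurations: section is required for the selector labels.
  - pdb.yaml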
